| modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars) |
---|---|---|---|---|---|---|
BigSalmon/T5F | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ririying/mt5-small-finetuned-mt5-class1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ririying/mt5-small-finetuned-mt5-class1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0908
- Validation Loss: 1.7689
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reconstruction sketch follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 71320, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
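For reference, a minimal sketch of how this optimizer configuration could be rebuilt in TensorFlow; the values are taken from the list above, and the variable names are illustrative:
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear (power=1.0) polynomial decay from 5.6e-05 to 0.0 over 71,320 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5.6e-05,
    decay_steps=71320,
    end_learning_rate=0.0,
    power=1.0,
)
# AdamWeightDecay as reported above, with a weight decay rate of 0.01.
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```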
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8999 | 2.2395 | 0 |
| 2.6457 | 1.9951 | 1 |
| 2.3865 | 1.8784 | 2 |
| 2.2622 | 1.8179 | 3 |
| 2.1877 | 1.7959 | 4 |
| 2.1395 | 1.7820 | 5 |
| 2.1085 | 1.7720 | 6 |
| 2.0908 | 1.7689 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
BigTooth/DialoGPT-small-tohru | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2022-11-30T09:37:51Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
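As a follow-up, the resulting embeddings can be compared with cosine similarity. This is a small sketch that assumes the `sentence_embeddings` tensor computed above:
```python
import torch.nn.functional as F

# Normalize the embeddings and compute pairwise cosine similarities.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)
```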
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 45 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 45,
"warmup_steps": 5,
"weight_decay": 0.01
}
```
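These parameters correspond to the `SentenceTransformer.fit()` API. A minimal training sketch with the same settings might look as follows; the training pairs shown are purely hypothetical, since the actual training data is not described in this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical (text, text, score) examples; replace with the real training data.
train_examples = [
    InputExample(texts=["A sentence", "A very similar sentence"], label=0.9),
    InputExample(texts=["A sentence", "An unrelated sentence"], label=0.1),
]

model = SentenceTransformer("{MODEL_NAME}")
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# fit() arguments mirror the parameters listed above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=5,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```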
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
BillelBenoudjit/jplu-wikiann | [
"fr",
"dataset:wikiann",
"model-index"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
language: en
datasets:
- Jzuluaga/uwb_atcc
tags:
- text
- token-classification
- en-atc
- en
- generated_from_trainer
- bert
- bertraffic
metrics:
- Precision
- Recall
- Accuracy
- F1
- Jaccard Error Rate
widget:
- text: "lining up runway three one csa five bravo easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye"
- text: "csa seven three two zero so change of taxi quality eight nine sierra we need to full length britair five nine zero bravo contact ruzyne ground one two one decimal nine good bye"
- text: "swiss four six one foxtrot line up runway three one and wait one two one nine csa four yankee alfa"
- text: "tower klm five five tango ils three one wizz air four papa uniform tower roger"
model-index:
- name: bert-base-token-classification-for-atc-en-uwb-atcc
results:
- task:
type: token-classification
name: chunking
dataset:
type: Jzuluaga/uwb_atcc
name: UWB-ATCC corpus (Air Traffic Control Communications)
config: test
split: test
metrics:
- type: F1
value: 0.87
name: TEST F1 (macro)
verified: False
- type: Accuracy
value: 0.91
name: TEST Accuracy
verified: False
- type: Precision
value: 0.86
name: TEST Precision (macro)
verified: False
- type: Recall
value: 0.88
name: TEST Recall (macro)
verified: False
- type: Jaccard Error Rate
value: 0.169
name: TEST Jaccard Error Rate
verified: False
---
# bert-base-token-classification-for-atc-en-uwb-atcc
This model detects speaker roles and speaker changes from text alone. Normally, this task is done at the acoustic level; here, we propose to perform it at the text level.
We solve this challenge by performing speaker role and speaker change detection with a BERT model, fine-tuned on a chunking (token-classification) task.
For instance:
- Speaker 1: **lufthansa six two nine charlie tango report when established**
- Speaker 2: **report when established lufthansa six two nine charlie tango**
Based on that, could you tell the speaker roles? Is speaker 1 the air traffic controller or the pilot?
Also, if you have a recording with 2 or more speakers, like this:
- Recording with 2 or more segments: **report when established lufthansa six two nine charlie tango lufthansa six two nine charlie tango report when established**
could you tell when the first speaker ends and when the second starts? This is basically diarization plus speaker role detection.
Check the inference API (there are 3 examples)!
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [UWB-ATCC corpus](https://huggingface.co/datasets/Jzuluaga/uwb_atcc).
<a href="https://github.com/idiap/bert-text-diarization-atc">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green">
</a>
It achieves the following results on the evaluation set:
- Loss: 0.0098
- Precision: 0.9760
- Recall: 0.9741
- F1: 0.9750
- Accuracy: 0.9965
Paper: [BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications](https://arxiv.org/abs/2110.05781).
Authors: Juan Zuluaga-Gomez, Seyyed Saeed Sarfjoo, Amrutha Prasad, Iuliia Nigmatulina, Petr Motlicek, Karel Ondrej, Oliver Ohneiser, Hartmut Helmke
Abstract: Automatic speech recognition (ASR) allows transcribing the communications between air traffic controllers (ATCOs) and aircraft pilots. The transcriptions are used later to extract ATC named entities, e.g., aircraft callsigns. One common challenge is speech activity detection (SAD) and speaker diarization (SD). In the failure condition, two or more segments remain in the same recording, jeopardizing the overall performance. We propose a system that combines SAD and a BERT model to perform speaker change detection and speaker role detection (SRD) by chunking ASR transcripts, i.e., SD with a defined number of speakers together with SRD. The proposed model is evaluated on real-life public ATC databases. Our BERT SD model baseline reaches up to 10% and 20% token-based Jaccard error rate (JER) in public and private ATC databases. We also achieved relative improvements of 32% and 7.7% in JERs and SD error rate (DER), respectively, compared to VBx, a well-known SD system.
Code — GitHub repository: https://github.com/idiap/bert-text-diarization-atc
## Intended uses & limitations
This model was fine-tuned on air traffic control data. We do not expect it to maintain the same performance on other datasets on which BERT was pre-trained or fine-tuned.
## Training and evaluation data
See Table 3 (page 5) in our paper: [BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications](https://arxiv.org/abs/2110.05781). There we describe the data used to fine-tune our model for speaker role and speaker change detection.
- We use the UWB-ATCC corpus to fine-tune this model. You can download the raw data here: https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0
- However, do not worry: we have prepared scripts in our repository for preparing this database:
- Dataset preparation folder: https://github.com/idiap/bert-text-diarization-atc/tree/main/data/databases/uwb_atcc
- Prepare the data: https://github.com/idiap/bert-text-diarization-atc/blob/main/data/databases/uwb_atcc/data_prepare_uwb_atcc_corpus.sh
- Get the data in the format required by HuggingFace: https://github.com/idiap/bert-text-diarization-atc/blob/main/data/databases/uwb_atcc/exp_prepare_uwb_atcc_corpus.sh
## Writing your own inference script
The following code snippet shows how to run inference:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jzuluaga/bert-base-token-classification-for-atc-en-uwb-atcc")
model = AutoModelForTokenClassification.from_pretrained("Jzuluaga/bert-base-token-classification-for-atc-en-uwb-atcc")
##### Process text sample (from UWB-ATCC)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("lining up runway three one csa five bravo b easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye)
[{'entity_group': 'pilot',
'score': 0.99991554,
'word': 'lining up runway three one csa five bravo b', 'start': 0, 'end': 43
},
{'entity_group': 'atco',
'score': 0.99994576,
'word': 'easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye', 'start': 44, 'end': 126
}]
```
# Cite us
If you use this code for your research, please cite our paper with:
```
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
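A sketch of how these hyperparameters could be expressed as Hugging Face `TrainingArguments`; the `output_dir` is a placeholder, the listed batch sizes are assumed to be per device, and other details of the original training script may differ:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-token-classification-for-atc-en-uwb-atcc",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=10000,
)
```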
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.03 | 500 | 0.2282 | 0.6818 | 0.7001 | 0.6908 | 0.9246 |
| 0.3487 | 0.06 | 1000 | 0.1214 | 0.8163 | 0.8024 | 0.8093 | 0.9631 |
| 0.3487 | 0.1 | 1500 | 0.0933 | 0.8496 | 0.8544 | 0.8520 | 0.9722 |
| 0.1124 | 0.13 | 2000 | 0.0693 | 0.8845 | 0.8739 | 0.8791 | 0.9786 |
| 0.1124 | 0.16 | 2500 | 0.0540 | 0.8993 | 0.8911 | 0.8952 | 0.9817 |
| 0.0667 | 0.19 | 3000 | 0.0474 | 0.9058 | 0.8929 | 0.8993 | 0.9857 |
| 0.0667 | 0.23 | 3500 | 0.0418 | 0.9221 | 0.9245 | 0.9233 | 0.9865 |
| 0.0492 | 0.26 | 4000 | 0.0294 | 0.9369 | 0.9415 | 0.9392 | 0.9903 |
| 0.0492 | 0.29 | 4500 | 0.0263 | 0.9512 | 0.9446 | 0.9479 | 0.9911 |
| 0.0372 | 0.32 | 5000 | 0.0223 | 0.9495 | 0.9497 | 0.9496 | 0.9915 |
| 0.0372 | 0.35 | 5500 | 0.0212 | 0.9530 | 0.9514 | 0.9522 | 0.9923 |
| 0.0308 | 0.39 | 6000 | 0.0177 | 0.9585 | 0.9560 | 0.9572 | 0.9933 |
| 0.0308 | 0.42 | 6500 | 0.0169 | 0.9619 | 0.9613 | 0.9616 | 0.9936 |
| 0.0261 | 0.45 | 7000 | 0.0140 | 0.9689 | 0.9662 | 0.9676 | 0.9951 |
| 0.0261 | 0.48 | 7500 | 0.0130 | 0.9652 | 0.9629 | 0.9641 | 0.9945 |
| 0.0214 | 0.51 | 8000 | 0.0127 | 0.9676 | 0.9635 | 0.9656 | 0.9953 |
| 0.0214 | 0.55 | 8500 | 0.0109 | 0.9714 | 0.9708 | 0.9711 | 0.9959 |
| 0.0177 | 0.58 | 9000 | 0.0103 | 0.9740 | 0.9727 | 0.9734 | 0.9961 |
| 0.0177 | 0.61 | 9500 | 0.0101 | 0.9768 | 0.9744 | 0.9756 | 0.9963 |
| 0.0159 | 0.64 | 10000 | 0.0098 | 0.9760 | 0.9741 | 0.9750 | 0.9965 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Biniam/en_ti_translate | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 197 with parameters:
```
{'batch_size': 13, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 591,
"warmup_steps": 60,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
BinksSachary/DialoGPT-small-shaxx | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 56 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 56,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
BitanBiswas/mbert-bengali-ner-finetuned-ner | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-11-30T09:56:54Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute butterflies.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("kzipa/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
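Optionally, sampling can be sped up by moving the pipeline to a GPU, and the generated image can be saved to disk. A small usage note that assumes a CUDA device is available:
```python
# Move the pipeline to GPU (if available) and save the generated sample.
pipeline.to("cuda")
image = pipeline().images[0]
image.save("butterfly.png")
```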
|
Blackmist786/DialoGPt-small-transformers4 | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 56 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 56,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Blazeolmo/Scrabunzi | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-30T10:10:10Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2560 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 9.46667947923119e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 5120,
"warmup_steps": 512,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
BlindMan820/Sarcastic-News-Headlines | [
"pytorch",
"distilbert",
"text-classification",
"English",
"dataset:Kaggle Dataset",
"transformers",
"Text",
"Sequence-Classification",
"Sarcasm",
"DistilBert"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute butterflies.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("kzipa/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
Bman/DialoGPT-medium-harrypotter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-30T10:26:05Z | ---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6.7b-ri-reproduce-combined-4-gpu-20-val
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-ri-reproduce-combined-4-gpu-20-val
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9434
- Accuracy: 0.0329
- Perplexity: 51.5916
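As a quick sanity check, the reported perplexity is simply the exponential of the evaluation loss:
```python
import math

# exp(validation loss) recovers the reported perplexity.
print(math.exp(3.9434))  # ~51.59
```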
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 100
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Perplexity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|
| 2.5731 | 1.0 | 79 | 2.6113 | 0.0317 | 13.6171 |
| 2.206 | 2.0 | 158 | 2.4805 | 0.0328 | 11.9469 |
| 1.9105 | 3.0 | 237 | 2.4512 | 0.0333 | 11.6019 |
| 1.6301 | 4.0 | 316 | 2.5078 | 0.0345 | 12.2780 |
| 1.3733 | 5.0 | 395 | 2.6816 | 0.0342 | 14.6090 |
| 1.1337 | 6.0 | 474 | 3.0078 | 0.0330 | 20.2431 |
| 0.9619 | 7.0 | 553 | 3.1777 | 0.0330 | 23.9923 |
| 0.798 | 8.0 | 632 | 3.2559 | 0.0330 | 25.9419 |
| 0.6653 | 9.0 | 711 | 3.4277 | 0.0331 | 30.8068 |
| 0.552 | 10.0 | 790 | 3.5566 | 0.0333 | 35.0453 |
| 0.4568 | 11.0 | 869 | 3.7324 | 0.0324 | 41.7802 |
| 0.3756 | 12.0 | 948 | 3.8184 | 0.0328 | 45.5295 |
| 0.3119 | 13.0 | 1027 | 3.8477 | 0.0331 | 46.8831 |
| 0.2448 | 14.0 | 1106 | 3.9062 | 0.0329 | 49.7122 |
| 0.1986 | 15.0 | 1185 | 3.9434 | 0.0329 | 51.5916 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BobBraico/distilbert-base-uncased-finetuned-imdb | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta_checkpoint-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_checkpoint-finetuned-squad
This model is a fine-tuned version of [WillHeld/roberta-base-coqa](https://huggingface.co/WillHeld/roberta-base-coqa) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8468 | 1.0 | 5536 | 0.8168 |
| 0.6239 | 2.0 | 11072 | 0.8237 |
| 0.4805 | 3.0 | 16608 | 0.8934 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
BogdanKuloren/continual-learning-paper-embeddings-model | [
"pytorch",
"mpnet",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"MPNetModel"
],
"model_type": "mpnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: apache-2.0
---
## Chinese-English-mixed ASR model using icefall_conv_emformer2
### WenetSpeech test set results
| TEST_NET | TEST_MEETING |
|----------|--------------|
| 9.64 | 9.2 |
as shown in the decoding log `decoding_results/modified_beam_search_result`
### Training command
```
python3 conv_emformer_transducer_stateless2/train.py --world-size 8 --num-epochs 30 --start-epoch 1 --exp-dir conv_emformer_transducer_stateless2/exp --max-duration 400 --master-port 12321 --num-encoder-layers 12 --chunk-length 32 --cnn-module-kernel 31 --left-context-length 32 --right-context-length 8 --memory-size 32
```
### Model units are char+BPE, as listed in `data/lang_char_bpe/tokens.txt`
|
BonjinKim/dst_kor_bert | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
]
| null | {
"architectures": [
"BertForPreTraining"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-sngp-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sngp-squad-seed-42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4521 | 1.0 | 8248 | 2.0439 |
| 2.1298 | 2.0 | 16496 | 1.9074 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Botjallu/DialoGPT-small-harrypotter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch, assuming the trained pipeline was pushed to the repository linked under "Training results" below:
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("kzipa/ddpm-butterflies-128")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/kzipa/ddpm-butterflies-128/tensorboard?#scalars)
|
Branex/gpt-neo-2.7B | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-30T10:38:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4724
## Model description
More information needed
## Intended uses & limitations
More information needed
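Since this is a masked-language model fine-tuned on IMDB text, one plausible way to try it is through the `fill-mask` pipeline. This is a sketch; the model path below is a placeholder for the actual repository id or local checkpoint:
```python
from transformers import pipeline

# Placeholder path; replace with the actual model repository id or local directory.
fill_mask = pipeline("fill-mask", model="path/to/distilbert-base-uncased-finetuned-imdb")
print(fill_mask("This movie was absolutely [MASK]."))
```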
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7096 | 1.0 | 157 | 2.4928 |
| 2.5783 | 2.0 | 314 | 2.4239 |
| 2.528 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CAUKiel/JavaBERT | [
"pytorch",
"safetensors",
"bert",
"fill-mask",
"code",
"arxiv:2110.10404",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 388 | 2022-11-30T14:47:59Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("juancopi81/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
CBreit00/DialoGPT_small_Rick | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-regression
model_file: pipeline.skops
widget:
structuredData:
acceleration:
- 20.7
- 17.0
- 18.6
cylinders:
- 4
- 4
- 4
displacement:
- 98.0
- 120.0
- 120.0
horsepower:
- '65'
- '88'
- '79'
model year:
- 81
- 75
- 82
origin:
- 1
- 2
- 1
weight:
- 2380
- 2957
- 2625
---
# Model description
This is a regression model trained on the MPG dataset for this [Kaggle tutorial](https://www.kaggle.com/unofficialmerve/persisting-your-scikit-learn-model-using-skops/).
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below; a reconstruction sketch follows the table.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------------|
| ccp_alpha | 0.0 |
| criterion | squared_error |
| max_depth | |
| max_features | |
| max_leaf_nodes | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| random_state | |
| splitter | best |
</details>
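For reference, the estimator described by these hyperparameters can be reconstructed as follows. This is a sketch: all values are scikit-learn defaults, as shown in the table above.
```python
from sklearn.tree import DecisionTreeRegressor

# Explicitly spelling out the (default) hyperparameters from the table above.
model = DecisionTreeRegressor(
    ccp_alpha=0.0,
    criterion="squared_error",
    max_depth=None,
    max_features=None,
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    min_samples_leaf=1,
    min_samples_split=2,
    min_weight_fraction_leaf=0.0,
    random_state=None,
    splitter="best",
)
```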
### Model Plot
The model plot is below.
<style>#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 {color: black;background-color: white;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 pre{padding: 0;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-toggleable {background-color: white;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-estimator:hover {background-color: #d4ebff;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-item {z-index: 1;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-parallel::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 
50%;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-parallel-item:only-child::after {width: 0;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: relative;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-label label {font-family: monospace;font-weight: bold;background-color: white;display: inline-block;line-height: 1.2em;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-label-container {position: relative;z-index: 2;text-align: center;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3 div.sk-text-repr-fallback {display: none;}</style><div id="sk-3ea712fc-223a-4e18-9d66-e9fdc5d944b3" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>DecisionTreeRegressor()</pre><b>Please rerun this cell to show the HTML repr or trust the notebook.</b></div><div class="sk-container" hidden><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="37ade0f5-01f0-4181-acab-e7150c3b5fa2" type="checkbox" checked><label for="37ade0f5-01f0-4181-acab-e7150c3b5fa2" class="sk-toggleable__label sk-toggleable__label-arrow">DecisionTreeRegressor</label><div class="sk-toggleable__content"><pre>DecisionTreeRegressor()</pre></div></div></div></div></div>
## Evaluation Results
You can find the details of the evaluation process and the evaluation results below.
| Metric | Value |
|--------------------|---------------------------------------|
| Mean Squared Error | 10.86399394359616 |
| R-Squared | <function r2_score at 0x7f743fc54b00> |
# How to Get Started with the Model
Use the code below to get started with the model.
```python
import json

import pandas as pd
from skops.io import load

# Load the serialized scikit-learn pipeline.
clf = load("pipeline.skops")

# The accompanying config ships an example input for a quick sanity check.
with open("config.json") as f:
    config = json.load(f)
clf.predict(pd.DataFrame.from_dict(config["sklearn"]["example_input"]))
```
# Model Card Authors
This model card is written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
``` |
CLTL/icf-levels-adm | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole_REINFORCE
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 146.70 +/- 25.78
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
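A rough evaluation sketch (this assumes the classic `gym` API and that `policy` is the trained policy object built in the Unit 5 notebook, exposing the notebook's `act(state)` method; neither is shipped with this card):
```python
import gym

# `policy` is assumed to be the trained REINFORCE policy from the Unit 5 notebook;
# its act() method returns (action, log_prob) for a given observation.
env = gym.make("CartPole-v1")

episode_rewards = []
for _ in range(10):
    state = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action, _ = policy.act(state)
        state, reward, done, _ = env.step(action)
        total_reward += reward
    episode_rewards.append(total_reward)

# Should land in the neighbourhood of the reported 146.70 +/- 25.78 mean reward.
print(sum(episode_rewards) / len(episode_rewards))
```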
|
CM-CA/DialoGPT-small-cartman | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are the helper functions defined in the
# Deep Reinforcement Learning Class Q-Learning notebook.
model = load_from_hub(repo_id="Leo446673/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Cameron/BERT-SBIC-targetcategory | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_XLNET_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_XLNET_5E
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4591
- Accuracy: 0.9333
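For inference, a minimal sketch with the transformers text-classification pipeline (the card does not state the full hub repository id, so the model name below is a placeholder):
```python
from transformers import pipeline

# Placeholder repo id - replace it with the actual hub location of this checkpoint.
classifier = pipeline("text-classification", model="your-username/TweetEval_XLNET_5E")

# tweet_eval/sentiment has three classes (negative, neutral, positive),
# so the pipeline returns one of three labels with a confidence score.
print(classifier("I really enjoyed the new episode!"))
```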
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
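These settings correspond roughly to the following `TrainingArguments` (a sketch only; the original training script is not part of the card, and unlisted arguments keep their defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="TweetEval_XLNET_5E",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```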
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5575 | 0.04 | 50 | 0.2675 | 0.9 |
| 0.4177 | 0.08 | 100 | 0.2193 | 0.9067 |
| 0.2911 | 0.12 | 150 | 0.2482 | 0.9 |
| 0.3503 | 0.16 | 200 | 0.2424 | 0.9 |
| 0.3412 | 0.2 | 250 | 0.1913 | 0.9267 |
| 0.2747 | 0.24 | 300 | 0.1783 | 0.92 |
| 0.2999 | 0.28 | 350 | 0.2495 | 0.9133 |
| 0.3141 | 0.32 | 400 | 0.2460 | 0.9 |
| 0.2935 | 0.37 | 450 | 0.2034 | 0.92 |
| 0.2619 | 0.41 | 500 | 0.2600 | 0.9067 |
| 0.2454 | 0.45 | 550 | 0.2178 | 0.92 |
| 0.2809 | 0.49 | 600 | 0.2254 | 0.9133 |
| 0.288 | 0.53 | 650 | 0.1849 | 0.92 |
| 0.2769 | 0.57 | 700 | 0.1896 | 0.9267 |
| 0.3079 | 0.61 | 750 | 0.2153 | 0.9133 |
| 0.2598 | 0.65 | 800 | 0.3279 | 0.9067 |
| 0.3149 | 0.69 | 850 | 0.1985 | 0.92 |
| 0.2872 | 0.73 | 900 | 0.1801 | 0.9333 |
| 0.2554 | 0.77 | 950 | 0.2023 | 0.9267 |
| 0.2645 | 0.81 | 1000 | 0.2208 | 0.9067 |
| 0.2509 | 0.85 | 1050 | 0.2012 | 0.9333 |
| 0.2404 | 0.89 | 1100 | 0.1995 | 0.9067 |
| 0.2361 | 0.93 | 1150 | 0.1808 | 0.9133 |
| 0.2298 | 0.97 | 1200 | 0.2226 | 0.9333 |
| 0.193 | 1.01 | 1250 | 0.2535 | 0.9267 |
| 0.1603 | 1.06 | 1300 | 0.2163 | 0.9467 |
| 0.1916 | 1.1 | 1350 | 0.2479 | 0.92 |
| 0.1963 | 1.14 | 1400 | 0.1964 | 0.94 |
| 0.1667 | 1.18 | 1450 | 0.3139 | 0.9133 |
| 0.1668 | 1.22 | 1500 | 0.2204 | 0.9267 |
| 0.1677 | 1.26 | 1550 | 0.2468 | 0.9333 |
| 0.1601 | 1.3 | 1600 | 0.2394 | 0.94 |
| 0.1714 | 1.34 | 1650 | 0.2326 | 0.94 |
| 0.197 | 1.38 | 1700 | 0.1861 | 0.94 |
| 0.1777 | 1.42 | 1750 | 0.2518 | 0.94 |
| 0.1925 | 1.46 | 1800 | 0.1806 | 0.94 |
| 0.2068 | 1.5 | 1850 | 0.1319 | 0.9467 |
| 0.1716 | 1.54 | 1900 | 0.1199 | 0.9667 |
| 0.1442 | 1.58 | 1950 | 0.1694 | 0.96 |
| 0.1929 | 1.62 | 2000 | 0.1990 | 0.9467 |
| 0.1654 | 1.66 | 2050 | 0.2972 | 0.9333 |
| 0.1759 | 1.7 | 2100 | 0.1584 | 0.9467 |
| 0.1788 | 1.75 | 2150 | 0.2266 | 0.94 |
| 0.1796 | 1.79 | 2200 | 0.2746 | 0.9333 |
| 0.172 | 1.83 | 2250 | 0.2313 | 0.9333 |
| 0.1637 | 1.87 | 2300 | 0.2918 | 0.9267 |
| 0.2359 | 1.91 | 2350 | 0.2121 | 0.9267 |
| 0.1778 | 1.95 | 2400 | 0.2022 | 0.9333 |
| 0.1581 | 1.99 | 2450 | 0.2936 | 0.9067 |
| 0.1312 | 2.03 | 2500 | 0.2531 | 0.9333 |
| 0.1178 | 2.07 | 2550 | 0.2525 | 0.9267 |
| 0.0924 | 2.11 | 2600 | 0.2715 | 0.9333 |
| 0.0774 | 2.15 | 2650 | 0.2123 | 0.9533 |
| 0.091 | 2.19 | 2700 | 0.2128 | 0.9467 |
| 0.0948 | 2.23 | 2750 | 0.2187 | 0.9533 |
| 0.1121 | 2.27 | 2800 | 0.2438 | 0.9467 |
| 0.1259 | 2.31 | 2850 | 0.2197 | 0.9467 |
| 0.0747 | 2.35 | 2900 | 0.2727 | 0.9333 |
| 0.114 | 2.39 | 2950 | 0.3197 | 0.9333 |
| 0.086 | 2.44 | 3000 | 0.3643 | 0.9333 |
| 0.1326 | 2.48 | 3050 | 0.2791 | 0.94 |
| 0.1017 | 2.52 | 3100 | 0.2661 | 0.9333 |
| 0.0719 | 2.56 | 3150 | 0.2797 | 0.94 |
| 0.1424 | 2.6 | 3200 | 0.1819 | 0.96 |
| 0.106 | 2.64 | 3250 | 0.2770 | 0.94 |
| 0.0996 | 2.68 | 3300 | 0.2213 | 0.94 |
| 0.0835 | 2.72 | 3350 | 0.2894 | 0.9333 |
| 0.0808 | 2.76 | 3400 | 0.3424 | 0.9333 |
| 0.1406 | 2.8 | 3450 | 0.2166 | 0.94 |
| 0.0345 | 2.84 | 3500 | 0.3146 | 0.9333 |
| 0.1247 | 2.88 | 3550 | 0.2824 | 0.9467 |
| 0.076 | 2.92 | 3600 | 0.2650 | 0.9467 |
| 0.134 | 2.96 | 3650 | 0.2758 | 0.9267 |
| 0.0521 | 3.0 | 3700 | 0.2693 | 0.9467 |
| 0.0366 | 3.04 | 3750 | 0.3428 | 0.9333 |
| 0.0682 | 3.08 | 3800 | 0.2779 | 0.9533 |
| 0.0624 | 3.12 | 3850 | 0.2563 | 0.9467 |
| 0.0402 | 3.17 | 3900 | 0.3086 | 0.94 |
| 0.052 | 3.21 | 3950 | 0.3324 | 0.94 |
| 0.0579 | 3.25 | 4000 | 0.3165 | 0.9467 |
| 0.0411 | 3.29 | 4050 | 0.3507 | 0.9467 |
| 0.0507 | 3.33 | 4100 | 0.3108 | 0.9533 |
| 0.0326 | 3.37 | 4150 | 0.3645 | 0.94 |
| 0.085 | 3.41 | 4200 | 0.3390 | 0.94 |
| 0.022 | 3.45 | 4250 | 0.3367 | 0.94 |
| 0.0689 | 3.49 | 4300 | 0.3433 | 0.94 |
| 0.0458 | 3.53 | 4350 | 0.3359 | 0.9533 |
| 0.0384 | 3.57 | 4400 | 0.3642 | 0.9467 |
| 0.0415 | 3.61 | 4450 | 0.3429 | 0.9467 |
| 0.0362 | 3.65 | 4500 | 0.3727 | 0.9467 |
| 0.0351 | 3.69 | 4550 | 0.3293 | 0.9467 |
| 0.06 | 3.73 | 4600 | 0.4717 | 0.92 |
| 0.0344 | 3.77 | 4650 | 0.3668 | 0.94 |
| 0.0518 | 3.81 | 4700 | 0.3461 | 0.94 |
| 0.046 | 3.86 | 4750 | 0.4020 | 0.9267 |
| 0.0735 | 3.9 | 4800 | 0.2660 | 0.9467 |
| 0.0453 | 3.94 | 4850 | 0.3364 | 0.9333 |
| 0.039 | 3.98 | 4900 | 0.4398 | 0.92 |
| 0.0497 | 4.02 | 4950 | 0.3476 | 0.94 |
| 0.0183 | 4.06 | 5000 | 0.3871 | 0.94 |
| 0.0558 | 4.1 | 5050 | 0.4066 | 0.9267 |
| 0.0358 | 4.14 | 5100 | 0.3926 | 0.92 |
| 0.0507 | 4.18 | 5150 | 0.3312 | 0.9467 |
| 0.0111 | 4.22 | 5200 | 0.3976 | 0.9267 |
| 0.0363 | 4.26 | 5250 | 0.4753 | 0.92 |
| 0.0283 | 4.3 | 5300 | 0.4234 | 0.9267 |
| 0.0097 | 4.34 | 5350 | 0.4547 | 0.9333 |
| 0.0018 | 4.38 | 5400 | 0.4687 | 0.9267 |
| 0.0344 | 4.42 | 5450 | 0.4274 | 0.9333 |
| 0.021 | 4.46 | 5500 | 0.4448 | 0.9333 |
| 0.0092 | 4.5 | 5550 | 0.4672 | 0.9333 |
| 0.0354 | 4.55 | 5600 | 0.4666 | 0.9333 |
| 0.029 | 4.59 | 5650 | 0.4614 | 0.9333 |
| 0.0182 | 4.63 | 5700 | 0.4840 | 0.9333 |
| 0.043 | 4.67 | 5750 | 0.4327 | 0.9333 |
| 0.0259 | 4.71 | 5800 | 0.4639 | 0.9333 |
| 0.0224 | 4.75 | 5850 | 0.4607 | 0.9333 |
| 0.0302 | 4.79 | 5900 | 0.4606 | 0.9333 |
| 0.0224 | 4.83 | 5950 | 0.4654 | 0.9333 |
| 0.0431 | 4.87 | 6000 | 0.4681 | 0.9333 |
| 0.0284 | 4.91 | 6050 | 0.4622 | 0.9333 |
| 0.0326 | 4.95 | 6100 | 0.4602 | 0.9333 |
| 0.018 | 4.99 | 6150 | 0.4591 | 0.9333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Capreolus/birch-bert-large-car_mb | [
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
]
| null | {
"architectures": [
"BertForNextSentencePrediction"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-credit_cards-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Cedille/fr-boris | [
"pytorch",
"gptj",
"text-generation",
"fr",
"dataset:c4",
"arxiv:2202.03371",
"transformers",
"causal-lm",
"license:mit",
"has_space"
]
| text-generation | {
"architectures": [
"GPTJForCausalLM"
],
"model_type": "gptj",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 401 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-home-1-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-home-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7614 | 1.0 | 1 | 2.6146 | 0.1889 |
| 2.2082 | 2.0 | 2 | 2.5232 | 0.2667 |
| 1.8344 | 3.0 | 3 | 2.4516 | 0.2933 |
| 1.4601 | 4.0 | 4 | 2.4033 | 0.3267 |
| 1.2748 | 5.0 | 5 | 2.3789 | 0.3356 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-large-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-auto_and_commute-1-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-xlarge-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-auto_and_commute-5-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-xlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-auto_and_commute-6-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-xlarge-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-auto_and_commute-9-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-9-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-xxlarge-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-travel-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-travel-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1384
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7625 | 1.0 | 1 | 2.5258 | 0.2933 |
| 2.0955 | 2.0 | 2 | 2.3775 | 0.3333 |
| 1.7076 | 3.0 | 3 | 2.2590 | 0.38 |
| 1.3257 | 4.0 | 4 | 2.1788 | 0.4089 |
| 1.1109 | 5.0 | 5 | 2.1384 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-large-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
]
| null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 75 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-travel-7-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-travel-7-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1384
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7625 | 1.0 | 1 | 2.5258 | 0.2933 |
| 2.0955 | 2.0 | 2 | 2.3775 | 0.3333 |
| 1.7076 | 3.0 | 3 | 2.2590 | 0.38 |
| 1.3257 | 4.0 | 4 | 2.1788 | 0.4089 |
| 1.1109 | 5.0 | 5 | 2.1384 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-tiny-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
]
| null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 393 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-travel-8-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-travel-8-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1384
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7625 | 1.0 | 1 | 2.5258 | 0.2933 |
| 2.0955 | 2.0 | 2 | 2.3775 | 0.3333 |
| 1.7076 | 3.0 | 3 | 2.2590 | 0.38 |
| 1.3257 | 4.0 | 4 | 2.1788 | 0.4089 |
| 1.1109 | 5.0 | 5 | 2.1384 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pos | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-utility-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-utility-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3728
- Accuracy: 0.3956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8194 | 1.0 | 1 | 2.6027 | 0.3156 |
| 2.2337 | 2.0 | 2 | 2.5079 | 0.3778 |
| 1.7996 | 3.0 | 3 | 2.4293 | 0.3822 |
| 1.4591 | 4.0 | 4 | 2.3728 | 0.3956 |
| 1.3205 | 5.0 | 5 | 2.3439 | 0.3956 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pos | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-utility-8-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-utility-8-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3728
- Accuracy: 0.3956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8194 | 1.0 | 1 | 2.6027 | 0.3156 |
| 2.2337 | 2.0 | 2 | 2.5079 | 0.3778 |
| 1.7996 | 3.0 | 3 | 2.4293 | 0.3822 |
| 1.4591 | 4.0 | 4 | 2.3728 | 0.3956 |
| 1.3205 | 5.0 | 5 | 2.3439 | 0.3956 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/distilbert-base-spanish-uncased-finetuned-mldoc | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-utility-9-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-utility-9-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3728
- Accuracy: 0.3956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8194 | 1.0 | 1 | 2.6027 | 0.3156 |
| 2.2337 | 2.0 | 2 | 2.5079 | 0.3778 |
| 1.7996 | 3.0 | 3 | 2.4293 | 0.3822 |
| 1.4591 | 4.0 | 4 | 2.3728 | 0.3956 |
| 1.3205 | 5.0 | 5 | 2.3439 | 0.3956 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: all-roberta-large-v1-work-3-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-work-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3586
- Accuracy: 0.3689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 |
| 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 |
| 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 |
| 1.5539 | 4.0 | 4 | 2.3874 | 0.36 |
| 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Champion/test_upload_vox2_wavlm_epoch8 | [
"sidekit",
"audio"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
illustrator: Mitsuhiro Kimura
license: Futabasha
---
Kobayashi, from Kobayashi-san Chi no Maid Dragon.
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTModel

# Character reference: https://ficcion-sin-limites.fandom.com/es/wiki/Kobayashi
url = "https://static.wikia.nocookie.net/wikiseriesjaponesas/images/d/d4/Kobayashi.png/revision/latest?cb=20170801205650&path-prefix=es"
# Alternative portrait: https://static.wikia.nocookie.net/wikiseriesjaponesas/images/d/d2/Kobayashi.png/revision/latest?cb=20170801205650&path-prefix=es
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch32-224-in21k')
model = ViTModel.from_pretrained('google/vit-base-patch32-224-in21k')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# Shape (1, 50, 768): 49 patch embeddings of the 224x224 image plus the [CLS] token
last_hidden_state = outputs.last_hidden_state |
Chan/distilgpt2-finetuned-wikitext2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-30T20:58:15Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 4.326394417589792e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 30,
"warmup_steps": 3,
"weight_decay": 0.01
}
```
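Putting the pieces above together, a reconstruction of the training call could look like the sketch below (the sentence pairs are purely illustrative and the base ALBERT checkpoint is not named in this card):
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Base checkpoint placeholder; the card only reveals that the backbone is an AlbertModel.
model = SentenceTransformer("path/to/base-albert-model")

# Illustrative scored sentence pairs; CosineSimilarityLoss expects a float label.
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=0.9),
    InputExample(texts=["A man is eating food.", "A plane is taking off."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=3,
    optimizer_params={"lr": 4.326394417589792e-05},
    weight_decay=0.01,
)
```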
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Chandanbhat/distilbert-base-uncased-finetuned-cola | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-dutch-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dutch-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5834
- eval_wer: 0.3471
- eval_cer: 0.1181
- eval_runtime: 338.6313
- eval_samples_per_second: 14.582
- eval_steps_per_second: 1.825
- epoch: 14.87
- step: 4000
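A minimal transcription sketch (the card does not state the owner of this repository, so the repo id below is a placeholder; the model expects 16 kHz Dutch speech):
```python
from transformers import pipeline

# Placeholder repo id - substitute the actual hub location of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-large-xls-r-300m-dutch-colab",
)

# The pipeline decodes and resamples common audio formats before transcribing.
print(asr("dutch_sample.wav")["text"])
```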
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Cheatham/xlm-roberta-large-finetuned-d12 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: DLL888/deberta-v3-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DLL888/deberta-v3-base-squad
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the [SQuAD](https://huggingface.co/datasets/squad) dataset.
It achieves the following results on the evaluation set:
- Exact Match: 88.08893093661305
- F1: 93.75543944888847
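A quick way to query the model is the question-answering pipeline; a short sketch:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="DLL888/deberta-v3-base-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="DLL888/deberta-v3-base-squad is a DeBERTa-v3-base model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```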
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training Machine
Trained in Google Colab Pro with the following specs:
- A100-SXM4-40GB
- NVIDIA-SMI 460.32.03
- Driver Version: 460.32.03
- CUDA Version: 11.2
Training took about 26 minutes for two epochs.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10538, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 500, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
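In Keras terms this corresponds roughly to the schedule produced by `transformers.create_optimizer` (a sketch; the exact training script is not included in the card):
```python
from transformers import create_optimizer

# 500 warmup steps up to 2e-5, then a linear (power=1.0) decay over 10538 steps to 0,
# matching the WarmUp + PolynomialDecay configuration listed above.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=10538 + 500,  # decay_steps + warmup_steps from the config
    num_warmup_steps=500,
)
```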
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.0540 | 0.7261 | 0.6885 | 0.7617 | 0.7841 | 0.7530 | 0 |
| 0.6248 | 0.8212 | 0.7777 | 0.7594 | 0.7873 | 0.7569 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Cheatham/xlm-roberta-large-finetuned-d12_2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/blewglass/1669844278462/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1589805873366724610/ifGVL-6g_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">come back clammy</div>
<div style="text-align: center; font-size: 14px;">@blewglass</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from come back clammy.
| Data | come back clammy |
| --- | --- |
| Tweets downloaded | 3174 |
| Retweets | 582 |
| Short tweets | 317 |
| Tweets kept | 2275 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cybl684/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @blewglass's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zifv54gk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zifv54gk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/blewglass')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Cheatham/xlm-roberta-large-finetuned-d1r01 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 21 | null | ---
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/tuwonga/supermarionation/resolve/main/supermarionation_prev1.jpg"
tags:
- stable-diffusion
- text-to-image
---
### supermarionation
This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from Gerry Anderson's **_Supermarionation_** stop-motion productions, mainly the **_Thunderbirds_** TV series. Use the token **_supermarionation_** in your prompts to apply the style.
_Download the ckpt file from the "Files and versions" tab into the Stable Diffusion models folder of your web UI of choice._
_I've found the img2img output interesting (and really funny ^^). You can see the results in the second and third pictures (original/img2img). Play around with the denoising strength (40-70) and optionally enable the restore-faces option._
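_If you prefer to run the model from Python instead of a web UI, the sketch below shows the usual 🧨 diffusers approach. It assumes a diffusers-format version of the weights is available under `tuwonga/supermarionation` (the repo referenced by the thumbnail above); if only the ckpt is provided you would need to convert it first._
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch; assumes diffusers-format weights exist in this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "tuwonga/supermarionation", torch_dtype=torch.float16
).to("cuda")

prompt = "an astronaut in supermarionation style"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("supermarionation_sample.png")
```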
### supermarionation v2
This version was trained on characters and vehicles: 47 images, 9,400 steps, 20% text encoder.
-- **Characters and vehicles rendered with this model:**

_prompt and settings used: **[person/vehicle] in supermarionation style** | **Steps: 30, Sampler: Euler, CFG scale: 7.5**_
**Characters rendered with img2img:**

_prompt and settings used: **[person] in supermarionation style** | **Steps: 30 - you can play around with settings**_
**Characters rendered with supermarionation in txt2img:**

_prompt and settings used: **[person] in supermarionation style** | **Steps: 40 - you can play around with settings**_
**Characters rendered with supermarionation in img2img:**

_prompt and settings used: **[person] in supermarionation style** | **Steps: 40 - you can play around with settings**_
--
Supermarionation v1 was trained with TheLastBen's Dreambooth notebook, using 43 images for 8,600 steps with 18% text encoder.
--
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Cheatham/xlm-roberta-large-finetuned | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | 2022-11-30T21:47:38Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Stable Diffusion - Butterflies, 32px
Model developed for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class).
This model is a diffusion model for unconditional image generation of cute butterflies 🦋.
It was trained on a very small collection of 1,000 pictures for 30 epochs.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('alkiskoudounas/sd-butterflies-32px')
image = pipeline().images[0]
image
```
|
Cheatham/xlm-roberta-large-finetuned3 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 22 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/poisonjr/1669845035713/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1582446449228382209/8JRLlVu__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">gale na</div>
<div style="text-align: center; font-size: 14px;">@poisonjr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from gale na.
| Data | gale na |
| --- | --- |
| Tweets downloaded | 3204 |
| Retweets | 731 |
| Short tweets | 782 |
| Tweets kept | 1691 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33t9oiqy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @poisonjr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3c5vn57r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3c5vn57r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/poisonjr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Cheatham/xlm-roberta-large-finetuned4 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/kelseyhightower-mipsytipsy-rakyll/1669845299643/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1204077305271705606/j5XjhPAt_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1576759705933819904/iDotz1Gw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1492548437996310529/waX1aEU-_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kelsey Hightower & Charity Majors & Jaana Dogan ヤナ ドガン</div>
<div style="text-align: center; font-size: 14px;">@kelseyhightower-mipsytipsy-rakyll</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Kelsey Hightower & Charity Majors & Jaana Dogan ヤナ ドガン.
| Data | Kelsey Hightower | Charity Majors | Jaana Dogan ヤナ ドガン |
| --- | --- | --- | --- |
| Tweets downloaded | 3227 | 3194 | 3223 |
| Retweets | 464 | 509 | 297 |
| Short tweets | 246 | 415 | 240 |
| Tweets kept | 2517 | 2270 | 2686 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3shpfqlw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kelseyhightower-mipsytipsy-rakyll's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kgnzkmq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kgnzkmq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kelseyhightower-mipsytipsy-rakyll')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Chertilasus/main | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-30T22:15:02Z | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
widget:
structuredData:
sepal_length:
- 6.3
- 6.5
- 5.6
sepal_width:
- 3.3
- 3.0
- 2.5
---
### Linear Regression Model
This Linear Regression model was trained on the Iris dataset stored as a regular 2-dimensional numpy array.
Goal is to test this pr -> https://github.com/skops-dev/skops/pull/211
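If you want to try the model locally, a minimal loading sketch with `skops` and `joblib` could look like the following; the repo id, file name and two-feature input are placeholders/assumptions, not values taken from this card.
```python
import joblib
from skops import hub_utils

# Hypothetical repo id and file name; replace with this repository's actual values.
hub_utils.download(repo_id="your-username/iris-linear-model", dst="downloaded-model")
model = joblib.load("downloaded-model/model.pkl")

# Two features per sample, matching the widget example above (sepal_length, sepal_width).
print(model.predict([[6.3, 3.3], [6.5, 3.0], [5.6, 2.5]]))
```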
|
Chester/traffic-rec | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-30T22:22:07Z | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
widget:
structuredData:
sepal_length:
- 6.3
- 6.5
- 5.6
---
### Linear Regression Model
This Linear Regression model was trained on the Iris dataset stored as a regular 1-dimensional numpy array.
Goal is to test this pr -> https://github.com/skops-dev/skops/pull/211 |
Chikita1/www_stash_stock | [
"license:bsd-3-clause-clear"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-30T22:25:46Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('noobmldude/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Ching/negation_detector | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-11-30T22:35:32Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Stable Diffusion - Butterflies, 64px
Model developed for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class).
This model is a diffusion model for unconditional image generation of cute butterflies 🦋.
It was trained on a very small collection of 1,000 pictures for 30 epochs.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('alkiskoudounas/sd-butterflies-64px')
image = pipeline().images[0]
image
```
|
Chinmay/mlindia | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-30T22:38:12Z | Author: Varun Pai
Website: https://www.varunlpai.com/ |
Chiuchiyin/DialoGPT-small-Donald | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-11-30T22:39:42Z | ---
license: cc-by-4.0
---
# GenRead: FiD model trained on WebQ
This is the model checkpoint of GenRead [2], based on T5-3B and trained on the WebQ dataset [1].
Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW; LR 5e-5; best dev checkpoint at 11,500 steps.
References:
[1] Semantic parsing on freebase from question-answer pairs. EMNLP 2013.
[2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022
## Model performance
We evaluate it on the WebQ dataset; the exact-match (EM) score is 54.36.
|
ChrisVCB/DialoGPT-medium-ej | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2022-11-30T23:15:02Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0294
- Rouge1: 16.4909
- Rouge2: 7.9422
- Rougel: 16.3139
- Rougelsum: 16.3615
## Model description
More information needed
## Intended uses & limitations
More information needed
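In the absence of further documentation, a minimal inference sketch is shown below; the checkpoint name is a placeholder since the exact repo id is not stated in this card.
```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual location of this fine-tuned checkpoint.
summarizer = pipeline("summarization", model="your-username/mt5-small-finetuned-amazon-en-es")

review = (
    "I loved this book. The characters are well written, the plot kept me hooked "
    "until the very last page, and the translation is excellent."
)
print(summarizer(review)[0]["summary_text"])
```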
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.5928 | 1.0 | 1209 | 3.3005 | 14.6517 | 6.5194 | 14.3474 | 14.2801 |
| 3.9024 | 2.0 | 2418 | 3.1399 | 16.744 | 8.6706 | 16.0952 | 16.1512 |
| 3.5806 | 3.0 | 3627 | 3.0869 | 18.0041 | 9.2385 | 17.718 | 17.6889 |
| 3.4201 | 4.0 | 4836 | 3.0590 | 17.5844 | 8.972 | 17.1709 | 17.2169 |
| 3.3202 | 5.0 | 6045 | 3.0598 | 17.5762 | 8.6036 | 17.3677 | 17.3708 |
| 3.2436 | 6.0 | 7254 | 3.0409 | 16.7641 | 8.19 | 16.6109 | 16.5899 |
| 3.2079 | 7.0 | 8463 | 3.0332 | 16.6917 | 8.1747 | 16.4958 | 16.527 |
| 3.1801 | 8.0 | 9672 | 3.0294 | 16.4909 | 7.9422 | 16.3139 | 16.3615 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Chun/DialoGPT-large-dailydialog | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- precision
- recall
model-index:
- name: bert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- name: Precision
type: precision
value: 0.7311211804904578
- name: Recall
type: recall
value: 0.7298750848074663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1658
- Precision: 0.7311
- Recall: 0.7299
- Fscore: 0.7299
## Model description
More information needed
## Intended uses & limitations
More information needed
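As a rough usage sketch (the repo id below is a placeholder, not taken from this card), the model can be called through the text-classification pipeline:
```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual location of this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="your-username/bert-emotion")

print(classifier("I love swimming for the same reason I love meditating...the feeling of weightlessness."))
```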
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8562 | 1.0 | 815 | 0.7859 | 0.7527 | 0.6006 | 0.6173 |
| 0.5352 | 2.0 | 1630 | 0.9248 | 0.7545 | 0.7188 | 0.7293 |
| 0.2543 | 3.0 | 2445 | 1.1658 | 0.7311 | 0.7299 | 0.7299 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Chun/DialoGPT-medium-dailydialog | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2022-11-30T23:58:33Z | ---
license: openrail
---
Textual inversion embedding for SD 2.0 (768-v-ema.ckpt).
It turns even simple prompts into landscape paintings.
It was trained with no people in the dataset, but it can also work for them.
All examples were made with 20 steps, DPM++ 2M Karras, CFG 7 and 768x768 resolution, with no other prompts and no negatives.
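With a recent version of 🧨 diffusers the embedding can presumably be loaded via `load_textual_inversion`; the sketch below is an untested outline, and the local file name is a placeholder. In an A1111-style web UI you would instead drop the embedding into the `embeddings` folder and write `painted_landscape` in the prompt.
```python
import torch
from diffusers import StableDiffusionPipeline

# SD 2.0 768-v base model, matching the checkpoint this embedding was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Hypothetical file name; point this at the downloaded embedding file.
pipe.load_textual_inversion("painted_landscape.pt", token="painted_landscape")

image = pipe(
    "castle, high mountains, elvish architecture, painted_landscape",
    num_inference_steps=20, guidance_scale=7, height=768, width=768,
).images[0]
image.save("painted_landscape_sample.png")
```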
moon station, white block buildings, foreign planet, painted_landscape:

asteroid impact, huge sun, rocky landscape, painted_landscape:

foreign science fiction planet, weird tangled trees, purple plants:

black swamp, witch hut, mud water, painted_landscape:

small town marketplace, store and shop, medieval buildings, painted_landscape:

demon devil black stone tower, hellfire landscape, bone trees, painted_landscape:

vulcano, eruption, lava, painted_landscape:

mega cyberpunk city, neon lights, skyscraper, painted_landscape:

castle, high mountains, elvish architecture, painted_landscape:

portrait of a woman, red long hair, long black coat, painted_landscape:

portrait of a woman, top bun hair, frilly blue dress, painted_landscape:

portrait of a old man, grey bushy beard, catching a fish, painted_landscape:

|
Chun/DialoGPT-small-dailydialog | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2022-11-30T23:59:06Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- wit-400m
- imagenet-12k
---
# Model card for vit_base_patch16_clip_384.openai_ft_in12k_in1k
A Vision Transformer (ViT) image classification model. Pretrained on WIT-400M image-text pairs by OpenAI using CLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.9
- GMACs: 49.4
- Activations (M): 48.3
- Image size: 384 x 384
- **Papers:**
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- WIT-400M
- ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch16_clip_384.openai_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_clip_384.openai_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Chun/w-zh2en-mto | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-12-01T00:15:23Z | ---
library_name: keras
tags:
- plant-classification
- image-classification
---
# Classification of Grape Varieties using Convolutional Neural Network Models
Full credit goes to [Gabriel Carneiro](https://www.linkedin.com/in/gabriel-carneiro-81a13a64/).
## Supported varieties
- Códega
- Moscatel Galego
- Rabigato
- Tinta Roriz
- Tinto Cao
- Touriga Nacional
## Explainable AI support
We also support Explainable AI algorithms: Grad-CAM, Grad-CAM++, and LIME.
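A minimal loading sketch with `huggingface_hub` is given below; the repo id, input size and preprocessing are assumptions, not values documented in this card.
```python
import numpy as np
from PIL import Image
from huggingface_hub import from_pretrained_keras

# Hypothetical repo id; replace with the actual location of this Keras model.
model = from_pretrained_keras("your-username/grape-variety-cnn")

# Assumed 224x224 RGB input scaled to [0, 1]; check the repository's config for the real values.
img = Image.open("grape_leaf.jpg").convert("RGB").resize((224, 224))
x = np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)

probs = model.predict(x)[0]
print(int(probs.argmax()))  # index into the six varieties listed above
```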
|
Chungu424/repo | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
datasets:
- cardiffnlp/tweet_sentiment_multilingual
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_sentiment_multilingual
type: all
split: test
metrics:
- name: Micro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
type: micro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6931034482758621
- name: Macro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
type: micro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.692628774202147
- name: Accuracy (cardiffnlp/tweet_sentiment_multilingual/all)
type: accuracy_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6931034482758621
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual
This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base) on the
[`cardiffnlp/tweet_sentiment_multilingual (all)`](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train` and parameters have been tuned on the validation split `validation`.
The following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual/raw/main/metric.json)).
- F1 (micro): 0.6931034482758621
- F1 (macro): 0.692628774202147
- Accuracy: 0.6931034482758621
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
|
Chuu/Chumar | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: TowerBuilding
type: TowerBuilding
metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
verified: false
---
An **APPO** model trained on the **TowerBuilding** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
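Sample Factory 2.0 also ships a CLI helper for pulling checkpoints from the Hub (roughly `python -m sample_factory.huggingface.load_from_hub -r <repo_id>`). A plain Python sketch using `huggingface_hub` is shown below, with a placeholder repo id since the exact location is not stated in this card.
```python
from huggingface_hub import snapshot_download

# Hypothetical repo id; replace with the actual location of this checkpoint.
local_path = snapshot_download(repo_id="your-username/APPO-TowerBuilding")
print(local_path)  # point Sample Factory's train_dir / experiment arguments at this directory
```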
|
CoachCarter/distilbert-base-uncased-finetuned-squad | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This repo contains pre-trained models, checkpoints, training logs and decoding results of the following pull-request:
https://github.com/k2-fsa/icefall/pull/683
|
CoachCarter/distilbert-base-uncased | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8665180357857429
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1457
- F1: 0.8665
## Model description
More information needed
## Intended uses & limitations
More information needed
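For quick experimentation, a minimal inference sketch is shown below; the repo id is a placeholder since the exact checkpoint location is not stated in this card.
```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual location of this fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```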
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2537 | 1.0 | 1049 | 0.1758 | 0.8236 |
| 0.1335 | 2.0 | 2098 | 0.1442 | 0.8494 |
| 0.0811 | 3.0 | 3147 | 0.1457 | 0.8665 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CodeDanCode/CartmenBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
datasets:
- cardiffnlp/tweet_sentiment_multilingual
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_sentiment_multilingual
type: all
split: test
metrics:
- name: Micro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
type: micro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6169540229885058
- name: Macro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
type: micro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6168385894019698
- name: Accuracy (cardiffnlp/tweet_sentiment_multilingual/all)
type: accuracy_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6169540229885058
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the
[`cardiffnlp/tweet_sentiment_multilingual (all)`](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train` and parameters have been tuned on the validation split `validation`.
The following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual/raw/main/metric.json)).
- F1 (micro): 0.6169540229885058
- F1 (macro): 0.6168385894019698
- Accuracy: 0.6169540229885058
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
|
CodeDanCode/SP-KyleBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: t5-samsung
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: train
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 42.2345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-samsung
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8153
- Rouge1: 42.2345
- Rouge2: 18.983
- Rougel: 33.0073
- Rougelsum: 38.8755
- Gen Len: 36.4242
## Model description
More information needed
## Intended uses & limitations
More information needed
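As a rough usage sketch (the repo id below is a placeholder, not taken from this card), the model can be called through the summarization pipeline:
```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual location of this fine-tuned checkpoint.
summarizer = pipeline("summarization", model="your-username/t5-samsung")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue)[0]["summary_text"])
```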
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.0028 | 1.0 | 1841 | 1.8153 | 42.2345 | 18.983 | 33.0073 | 38.8755 | 36.4242 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
CodeMonkey98/distilroberta-base-finetuned-wikitext2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
model-index:
- name: multi-label-class-classification-on-github-issues
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-label-class-classification-on-github-issues
This model is a fine-tuned version of [neuralmagic/oBERT-12-upstream-pruned-unstructured-97](https://huggingface.co/neuralmagic/oBERT-12-upstream-pruned-unstructured-97) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1077
- Micro f1: 0.6520
- Macro f1: 0.0704
## Model description
More information needed
## Intended uses & limitations
More information needed
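Since this is a multi-label classifier, predictions are usually obtained with a per-label sigmoid and a threshold rather than a softmax. The sketch below illustrates that pattern; the repo id and example issue title are placeholders.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id; replace with the actual location of this fine-tuned checkpoint.
repo = "your-username/multi-label-class-classification-on-github-issues"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Add CUDA support to the new attention kernel", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: sigmoid per label, then threshold at 0.5.
probs = torch.sigmoid(logits)[0]
labels = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(labels)
```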
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 49 | 0.2835 | 0.3791 | 0.0172 |
| No log | 2.0 | 98 | 0.1710 | 0.3791 | 0.0172 |
| No log | 3.0 | 147 | 0.1433 | 0.3791 | 0.0172 |
| No log | 4.0 | 196 | 0.1333 | 0.4540 | 0.0291 |
| No log | 5.0 | 245 | 0.1247 | 0.5206 | 0.0352 |
| No log | 6.0 | 294 | 0.1173 | 0.6003 | 0.0541 |
| No log | 7.0 | 343 | 0.1125 | 0.6315 | 0.0671 |
| No log | 8.0 | 392 | 0.1095 | 0.6439 | 0.0699 |
| No log | 9.0 | 441 | 0.1072 | 0.6531 | 0.0713 |
| No log | 10.0 | 490 | 0.1075 | 0.6397 | 0.0695 |
| 0.1605 | 11.0 | 539 | 0.1074 | 0.6591 | 0.0711 |
| 0.1605 | 12.0 | 588 | 0.1043 | 0.6462 | 0.0703 |
| 0.1605 | 13.0 | 637 | 0.1049 | 0.6541 | 0.0709 |
| 0.1605 | 14.0 | 686 | 0.1051 | 0.6524 | 0.0713 |
| 0.1605 | 15.0 | 735 | 0.1061 | 0.6535 | 0.0770 |
| 0.1605 | 16.0 | 784 | 0.1034 | 0.6511 | 0.0708 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
CodeNinja1126/bert-p-encoder | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: ko
license: cc-by-4.0
tags:
- seq2seq
widget:
- text: "question: 조선 중기의 무신인 이순신이 태어난 날짜는? title: 이순신 context: 이순신(李舜臣, 1545년 4월 28일 (음력 3월 8일) ~ 1598년 12월 16일 (음력 11월 19일))은 조선 중기의 무신이었다. 본관은 덕수(德水), 자는 여해(汝諧), 시호는 충무(忠武)였으며, 한성 출신이었다. 문반 가문 출신으로 1576년(선조 9년) 무과(武科)에 급제[2]하여 그 관직이 동구비보 권관, 훈련원 봉사, 발포진 수군만호, 조산보 만호, 전라좌도수사를 거쳐 정헌대부 삼도수군통제사에 이르렀다."
- text: "question: 함장 마쓰오카 바키치는 배를 조정하는 명수로 로프 하나 손상되지 않았다고 말한 사람은? title: 반류마루 context: 일련의 하코다테 전쟁은 적아 쌍방의 문서에 마쓰오카 바키치 함장의 능란한 조함 능력과 냉정한 지휘만이 기록되어 있다. 함포 사격으로 마쓰마에 성을 공격하여 엄호한 이후, 1869년 메이지 2년 3월 25일 미야코 만 해전에서는 폭풍우를 만나 요함과 헤어졌을 때에 만날 약속했던 하치노헤 항에서 대기하고 있었기 때문에 참전에는 이르지 못했다. 이 폭풍우 때도 “함장 마쓰오카 바키치는 배를 조정하는 명수로 로프 하나 손상되지 않았다”고 타고 있던 하야시 다다스가 남긴 바 있다. 이 귀로에서 신정부 군의 철갑함의 추격을 받았다. 기관 능력의 차이로 인한 속도차 때문에 도주가 불가능하다고 판단하고 맞장 공격을 하겠다고 전투 준비를 했지만, 철갑선의 사정거리에 들어간 순간에 순풍이 불기 시작하여 추격을 뿌리치고 하코다테로 돌아올 수 있었다."
---
# pko-t5-base-finetuned-korquad
[Source Code](https://github.com/paust-team/pko-t5)
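A minimal usage sketch is given below; the repo id is a placeholder, and the prompt follows the `question: ... title: ... context: ...` format shown in the widget examples above.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id; replace with the actual location of this fine-tuned checkpoint.
repo = "your-username/pko-t5-base-finetuned-korquad"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

prompt = (
    "question: 조선 중기의 무신인 이순신이 태어난 날짜는? "
    "title: 이순신 "
    "context: 이순신(李舜臣, 1545년 4월 28일 (음력 3월 8일) ~ 1598년 12월 16일 (음력 11월 19일))은 조선 중기의 무신이었다."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```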
|
CodeNinja1126/bert-q-encoder | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch; the repo id is taken from the TensorBoard link below.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('yixiaoxu/ddpm-butterflies-128')
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/yixiaoxu/ddpm-butterflies-128/tensorboard?#scalars)
|
CoffeeAddict93/gpt1-call-of-the-wild | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
datasets:
- cardiffnlp/tweet_sentiment_multilingual
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/xlm-roberta-base-sentiment-multilingual
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_sentiment_multilingual
type: all
split: test
metrics:
- name: Micro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
type: micro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.665948275862069
- name: Macro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
      type: macro_f1_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.6628627126803655
- name: Accuracy (cardiffnlp/tweet_sentiment_multilingual/all)
type: accuracy_cardiffnlp/tweet_sentiment_multilingual/all
value: 0.665948275862069
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/xlm-roberta-base-sentiment-multilingual
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the
[`cardiffnlp/tweet_sentiment_multilingual (all)`](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train` and parameters have been tuned on the validation split `validation`.
The following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/xlm-roberta-base-sentiment-multilingual/raw/main/metric.json)).
- F1 (micro): 0.665948275862069
- F1 (macro): 0.6628627126803655
- Accuracy: 0.665948275862069
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/xlm-roberta-base-sentiment-multilingual", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
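The checkpoint can also be loaded directly with 🤗 Transformers; a minimal sketch:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cardiffnlp/xlm-roberta-base-sentiment-multilingual")
print(classifier("Yes, including Medicare and social security saving👍"))
```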
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
|
Contrastive-Tension/BERT-Base-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: bart-finetuned-idl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-idl
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0031
- Bleu: 0.0
- Gen Len: 4.9917
## Model description
More information needed
## Intended uses & limitations
More information needed
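As a standard seq2seq checkpoint, the model can be loaded with the usual BART classes. A minimal sketch follows; the checkpoint path below is a placeholder, and the input format depends on the (undocumented) training data:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "path/to/bart-finetuned-idl"  # placeholder: local path or Hub id of this checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("your input sequence here", return_tensors="pt")
outputs = model.generate(**inputs, max_length=16)  # reported generations are around 5 tokens long
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```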
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:----:|:-------:|
| 0.2005 | 1.0 | 13874 | 0.1589 | 0.0 | 5.0002 |
| 0.1182 | 2.0 | 27748 | 0.0949 | 0.0 | 4.9924 |
| 0.0983 | 3.0 | 41622 | 0.0778 | 0.0 | 4.9924 |
| 0.0724 | 4.0 | 55496 | 0.0724 | 0.0 | 4.9903 |
| 0.0532 | 5.0 | 69370 | 0.0549 | 0.0 | 4.9928 |
| 0.0458 | 6.0 | 83244 | 0.0463 | 0.0 | 4.9861 |
| 0.0435 | 7.0 | 97118 | 0.0548 | 0.0 | 4.9923 |
| 0.0464 | 8.0 | 110992 | 0.0847 | 0.0 | 4.9899 |
| 0.0317 | 9.0 | 124866 | 0.0303 | 0.0 | 4.9922 |
| 0.0302 | 10.0 | 138740 | 0.0284 | 0.0 | 4.9919 |
| 0.0306 | 11.0 | 152614 | 0.0120 | 0.0 | 4.9919 |
| 0.0224 | 12.0 | 166488 | 0.0462 | 0.0 | 4.9917 |
| 0.0184 | 13.0 | 180362 | 0.0138 | 0.0 | 4.9924 |
| 0.0208 | 14.0 | 194236 | 0.0730 | 0.0 | 4.9919 |
| 0.0149 | 15.0 | 208110 | 0.0126 | 0.0 | 4.992 |
| 0.0161 | 16.0 | 221984 | 0.0100 | 0.0 | 4.9915 |
| 0.0178 | 17.0 | 235858 | 0.0106 | 0.0 | 4.992 |
| 0.0116 | 18.0 | 249732 | 0.0149 | 0.0 | 4.9921 |
| 0.0096 | 19.0 | 263606 | 0.0085 | 0.0 | 4.9918 |
| 0.0094 | 20.0 | 277480 | 0.0101 | 0.0 | 4.9916 |
| 0.0084 | 21.0 | 291354 | 0.0093 | 0.0 | 4.9918 |
| 0.0077 | 22.0 | 305228 | 0.0138 | 0.0 | 4.992 |
| 0.0094 | 23.0 | 319102 | 0.0084 | 0.0 | 4.9918 |
| 0.0079 | 24.0 | 332976 | 0.0058 | 0.0 | 4.9917 |
| 0.006 | 25.0 | 346850 | 0.0067 | 0.0 | 4.9918 |
| 0.0046 | 26.0 | 360724 | 0.0041 | 0.0 | 4.9918 |
| 0.0049 | 27.0 | 374598 | 0.0061 | 0.0 | 4.9919 |
| 0.002 | 28.0 | 388472 | 0.0035 | 0.0 | 4.9918 |
| 0.003 | 29.0 | 402346 | 0.0038 | 0.0 | 4.9917 |
| 0.0027 | 30.0 | 416220 | 0.0050 | 0.0 | 4.9917 |
| 0.001 | 31.0 | 430094 | 0.0063 | 0.0 | 4.9918 |
| 0.0017 | 32.0 | 443968 | 0.0042 | 0.0 | 4.992 |
| 0.0013 | 33.0 | 457842 | 0.0032 | 0.0 | 4.9917 |
| 0.0005 | 34.0 | 471716 | 0.0031 | 0.0 | 4.9917 |
| 0.0003 | 35.0 | 485590 | 0.0031 | 0.0 | 4.9917 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0+cu111
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Corvus/DialoGPT-medium-CaptainPrice-Extended | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- code
license: bsd-3-clause
tags:
- code
- generative
datasets:
- bigcode/the-stack
---
# CodeGen (CodeGen-CSS 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is finetuned on top of the **CodeGen-Multi 350M**, where "Multi" means the model is initialized with *CodeGen-NL 350M* and further pre-trained on a dataset of multiple programming languages, and "350M" refers to the number of trainable parameters.
It has been fine-tuned on CSS code contained in the bigcode/the-stack dataset on Hugging Face.
## Training data
This checkpoint (CodeGen-Multi 350M) was firstly initialized with *CodeGen-NL 350M*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
Lastly, it has been fine-tuned on CSS code contained in the [bigcode/the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset on Hugging Face.
## Training procedure
Initial training:
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
Fine-tuning:
I fine-tuned the 350M model on a single A100 with 40 GB of RAM, using a batch size of 10 and an input length of 512 tokens, which used 80-90% of the RAM.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
model = AutoModelForCausalLM.from_pretrained("alecsharpie/codegen_350m_css")
text = ".header-container {"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
``` |
CouchCat/ma_ner_v7_distil | [
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"ner",
"license:mit",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: YELP_ELECTRA_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.96
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YELP_ELECTRA_5E
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- Accuracy: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
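These settings map onto 🤗 `TrainingArguments` roughly as follows. This is a reconstruction from the values above rather than the exact training script; the 50-step evaluation interval is inferred from the results table below.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="YELP_ELECTRA_5E",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",  # inferred: metrics are reported every 50 steps
    eval_steps=50,
)
```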
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6872 | 0.03 | 50 | 0.6751 | 0.5867 |
| 0.6407 | 0.06 | 100 | 0.5811 | 0.86 |
| 0.5551 | 0.1 | 150 | 0.4980 | 0.8667 |
| 0.4784 | 0.13 | 200 | 0.3889 | 0.9333 |
| 0.412 | 0.16 | 250 | 0.3349 | 0.9333 |
| 0.3826 | 0.19 | 300 | 0.3138 | 0.9133 |
| 0.3629 | 0.22 | 350 | 0.2568 | 0.96 |
| 0.335 | 0.26 | 400 | 0.2352 | 0.9333 |
| 0.2966 | 0.29 | 450 | 0.1907 | 0.9667 |
| 0.2776 | 0.32 | 500 | 0.1898 | 0.96 |
| 0.2428 | 0.35 | 550 | 0.1771 | 0.9533 |
| 0.2577 | 0.38 | 600 | 0.1610 | 0.96 |
| 0.2252 | 0.42 | 650 | 0.1503 | 0.96 |
| 0.2273 | 0.45 | 700 | 0.1425 | 0.9667 |
| 0.2155 | 0.48 | 750 | 0.1417 | 0.96 |
| 0.2681 | 0.51 | 800 | 0.1682 | 0.94 |
| 0.195 | 0.54 | 850 | 0.1527 | 0.96 |
| 0.2133 | 0.58 | 900 | 0.1480 | 0.9533 |
| 0.1996 | 0.61 | 950 | 0.1516 | 0.9533 |
| 0.2123 | 0.64 | 1000 | 0.1645 | 0.94 |
| 0.2263 | 0.67 | 1050 | 0.1449 | 0.96 |
| 0.1941 | 0.7 | 1100 | 0.1445 | 0.96 |
| 0.2273 | 0.74 | 1150 | 0.1389 | 0.96 |
| 0.2156 | 0.77 | 1200 | 0.1541 | 0.9533 |
| 0.193 | 0.8 | 1250 | 0.1512 | 0.9533 |
| 0.1851 | 0.83 | 1300 | 0.1949 | 0.92 |
| 0.2041 | 0.86 | 1350 | 0.1531 | 0.96 |
| 0.1924 | 0.9 | 1400 | 0.1640 | 0.9533 |
| 0.2453 | 0.93 | 1450 | 0.1639 | 0.9467 |
| 0.1774 | 0.96 | 1500 | 0.1729 | 0.9467 |
| 0.1999 | 0.99 | 1550 | 0.1618 | 0.94 |
| 0.1998 | 1.02 | 1600 | 0.1628 | 0.9467 |
| 0.1607 | 1.06 | 1650 | 0.1608 | 0.94 |
| 0.1878 | 1.09 | 1700 | 0.1659 | 0.9467 |
| 0.1702 | 1.12 | 1750 | 0.1694 | 0.9467 |
| 0.1711 | 1.15 | 1800 | 0.1666 | 0.9467 |
| 0.1517 | 1.18 | 1850 | 0.1560 | 0.9533 |
| 0.1521 | 1.22 | 1900 | 0.1662 | 0.9467 |
| 0.2297 | 1.25 | 1950 | 0.2137 | 0.94 |
| 0.2046 | 1.28 | 2000 | 0.1793 | 0.94 |
| 0.1869 | 1.31 | 2050 | 0.1673 | 0.9467 |
| 0.1684 | 1.34 | 2100 | 0.1730 | 0.9467 |
| 0.1359 | 1.38 | 2150 | 0.1817 | 0.94 |
| 0.1595 | 1.41 | 2200 | 0.1709 | 0.9467 |
| 0.1458 | 1.44 | 2250 | 0.1660 | 0.94 |
| 0.1518 | 1.47 | 2300 | 0.1735 | 0.9467 |
| 0.1239 | 1.5 | 2350 | 0.1514 | 0.9533 |
| 0.2183 | 1.54 | 2400 | 0.1644 | 0.9467 |
| 0.1678 | 1.57 | 2450 | 0.1578 | 0.9467 |
| 0.1516 | 1.6 | 2500 | 0.1562 | 0.9467 |
| 0.2575 | 1.63 | 2550 | 0.1516 | 0.9467 |
| 0.1576 | 1.66 | 2600 | 0.1684 | 0.9533 |
| 0.1134 | 1.7 | 2650 | 0.1691 | 0.96 |
| 0.2075 | 1.73 | 2700 | 0.1586 | 0.96 |
| 0.1425 | 1.76 | 2750 | 0.1516 | 0.96 |
| 0.1426 | 1.79 | 2800 | 0.1499 | 0.96 |
| 0.1295 | 1.82 | 2850 | 0.1563 | 0.96 |
| 0.1253 | 1.86 | 2900 | 0.1576 | 0.9533 |
| 0.1801 | 1.89 | 2950 | 0.1563 | 0.9533 |
| 0.1513 | 1.92 | 3000 | 0.1522 | 0.96 |
| 0.1204 | 1.95 | 3050 | 0.1604 | 0.9533 |
| 0.2055 | 1.98 | 3100 | 0.1483 | 0.96 |
| 0.1461 | 2.02 | 3150 | 0.1532 | 0.96 |
| 0.1044 | 2.05 | 3200 | 0.1540 | 0.96 |
| 0.116 | 2.08 | 3250 | 0.1604 | 0.96 |
| 0.1098 | 2.11 | 3300 | 0.1632 | 0.96 |
| 0.1259 | 2.14 | 3350 | 0.1640 | 0.96 |
| 0.1137 | 2.18 | 3400 | 0.1684 | 0.9533 |
| 0.135 | 2.21 | 3450 | 0.1568 | 0.9467 |
| 0.1819 | 2.24 | 3500 | 0.1497 | 0.96 |
| 0.1612 | 2.27 | 3550 | 0.1569 | 0.96 |
| 0.1699 | 2.3 | 3600 | 0.1594 | 0.96 |
| 0.1488 | 2.34 | 3650 | 0.1727 | 0.96 |
| 0.1079 | 2.37 | 3700 | 0.1830 | 0.9533 |
| 0.1209 | 2.4 | 3750 | 0.1657 | 0.96 |
| 0.1619 | 2.43 | 3800 | 0.1556 | 0.96 |
| 0.1544 | 2.46 | 3850 | 0.1627 | 0.96 |
| 0.1717 | 2.5 | 3900 | 0.1597 | 0.96 |
| 0.1198 | 2.53 | 3950 | 0.1470 | 0.9467 |
| 0.0922 | 2.56 | 4000 | 0.1643 | 0.96 |
| 0.1399 | 2.59 | 4050 | 0.1577 | 0.9467 |
| 0.1491 | 2.62 | 4100 | 0.1557 | 0.96 |
| 0.146 | 2.66 | 4150 | 0.1596 | 0.96 |
| 0.1617 | 2.69 | 4200 | 0.1608 | 0.96 |
| 0.1463 | 2.72 | 4250 | 0.1601 | 0.9467 |
| 0.1342 | 2.75 | 4300 | 0.1624 | 0.96 |
| 0.1492 | 2.78 | 4350 | 0.1586 | 0.96 |
| 0.1672 | 2.82 | 4400 | 0.1582 | 0.96 |
| 0.1403 | 2.85 | 4450 | 0.1572 | 0.96 |
| 0.1173 | 2.88 | 4500 | 0.1630 | 0.96 |
| 0.1345 | 2.91 | 4550 | 0.1571 | 0.96 |
| 0.171 | 2.94 | 4600 | 0.1562 | 0.96 |
| 0.125 | 2.98 | 4650 | 0.1477 | 0.9533 |
| 0.1494 | 3.01 | 4700 | 0.1404 | 0.96 |
| 0.1234 | 3.04 | 4750 | 0.1494 | 0.96 |
| 0.0926 | 3.07 | 4800 | 0.1538 | 0.96 |
| 0.1188 | 3.1 | 4850 | 0.1565 | 0.96 |
| 0.0986 | 3.13 | 4900 | 0.1679 | 0.96 |
| 0.1242 | 3.17 | 4950 | 0.1686 | 0.96 |
| 0.1193 | 3.2 | 5000 | 0.1688 | 0.96 |
| 0.1548 | 3.23 | 5050 | 0.1639 | 0.96 |
| 0.1216 | 3.26 | 5100 | 0.1601 | 0.96 |
| 0.1068 | 3.29 | 5150 | 0.1799 | 0.94 |
| 0.1582 | 3.33 | 5200 | 0.1594 | 0.96 |
| 0.1454 | 3.36 | 5250 | 0.1594 | 0.96 |
| 0.1631 | 3.39 | 5300 | 0.1555 | 0.96 |
| 0.1323 | 3.42 | 5350 | 0.1548 | 0.9667 |
| 0.145 | 3.45 | 5400 | 0.1573 | 0.9667 |
| 0.1221 | 3.49 | 5450 | 0.1611 | 0.96 |
| 0.1034 | 3.52 | 5500 | 0.1653 | 0.96 |
| 0.1096 | 3.55 | 5550 | 0.1688 | 0.96 |
| 0.096 | 3.58 | 5600 | 0.1690 | 0.9533 |
| 0.1228 | 3.61 | 5650 | 0.1671 | 0.9533 |
| 0.1133 | 3.65 | 5700 | 0.1710 | 0.9533 |
| 0.0939 | 3.68 | 5750 | 0.1772 | 0.96 |
| 0.1252 | 3.71 | 5800 | 0.1706 | 0.9533 |
| 0.0726 | 3.74 | 5850 | 0.1685 | 0.96 |
| 0.1144 | 3.77 | 5900 | 0.1696 | 0.9533 |
| 0.0902 | 3.81 | 5950 | 0.1753 | 0.9533 |
| 0.1462 | 3.84 | 6000 | 0.1699 | 0.96 |
| 0.1019 | 3.87 | 6050 | 0.1677 | 0.96 |
| 0.1374 | 3.9 | 6100 | 0.1727 | 0.96 |
| 0.1246 | 3.93 | 6150 | 0.1711 | 0.96 |
| 0.1026 | 3.97 | 6200 | 0.1728 | 0.96 |
| 0.1081 | 4.0 | 6250 | 0.1745 | 0.96 |
| 0.1014 | 4.03 | 6300 | 0.1760 | 0.9533 |
| 0.1047 | 4.06 | 6350 | 0.1726 | 0.96 |
| 0.0989 | 4.09 | 6400 | 0.1748 | 0.96 |
| 0.117 | 4.13 | 6450 | 0.1736 | 0.96 |
| 0.1499 | 4.16 | 6500 | 0.1755 | 0.96 |
| 0.0911 | 4.19 | 6550 | 0.1761 | 0.96 |
| 0.1165 | 4.22 | 6600 | 0.1734 | 0.96 |
| 0.1072 | 4.25 | 6650 | 0.1693 | 0.96 |
| 0.1166 | 4.29 | 6700 | 0.1703 | 0.96 |
| 0.0987 | 4.32 | 6750 | 0.1715 | 0.9467 |
| 0.0996 | 4.35 | 6800 | 0.1700 | 0.96 |
| 0.1267 | 4.38 | 6850 | 0.1633 | 0.96 |
| 0.1374 | 4.41 | 6900 | 0.1642 | 0.9667 |
| 0.0699 | 4.45 | 6950 | 0.1628 | 0.96 |
| 0.0773 | 4.48 | 7000 | 0.1642 | 0.96 |
| 0.0903 | 4.51 | 7050 | 0.1649 | 0.96 |
| 0.1357 | 4.54 | 7100 | 0.1641 | 0.96 |
| 0.1252 | 4.57 | 7150 | 0.1659 | 0.9667 |
| 0.1013 | 4.61 | 7200 | 0.1663 | 0.96 |
| 0.1071 | 4.64 | 7250 | 0.1653 | 0.96 |
| 0.1094 | 4.67 | 7300 | 0.1671 | 0.96 |
| 0.1103 | 4.7 | 7350 | 0.1650 | 0.96 |
| 0.1169 | 4.73 | 7400 | 0.1656 | 0.96 |
| 0.0858 | 4.77 | 7450 | 0.1651 | 0.96 |
| 0.0925 | 4.8 | 7500 | 0.1669 | 0.96 |
| 0.1572 | 4.83 | 7550 | 0.1663 | 0.96 |
| 0.1125 | 4.86 | 7600 | 0.1655 | 0.96 |
| 0.1011 | 4.89 | 7650 | 0.1654 | 0.96 |
| 0.1307 | 4.93 | 7700 | 0.1656 | 0.96 |
| 0.1195 | 4.96 | 7750 | 0.1656 | 0.96 |
| 0.1004 | 4.99 | 7800 | 0.1658 | 0.96 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Coyotl/DialoGPT-test-last-arthurmorgan | [
"conversational"
]
| conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
widget:
- text: "なんでしょう?"
context: "御社と一度ご一緒したことがあるというデザイナー伊藤美奈の案内で問い合わせ差し上げております。\n現在、3ページほどのサイト製作を依頼できる先を探しているのですが、お見積もり等をお願いすることは可能でしょうか? 大体5段×3ページくらいの分量で、現在①デザインガイドラインと素材をお渡しし、デザインからコーディングまで②こちらでデザインを組コーディングのみ、のどちらかでご依頼できたらと思います。 納期は年始2-3月になるかと思います。\n以上宜しくお願いいたします。"
- text: "なんでしょう?"
context: "株式会社キャンバスご担当者様、初めてメールを送らせていただきます、株式会社フリープラスの阪田と申します。弊社は国内最大級のインバウンドを専門にした大阪の旅行会社で、世界40カ国1,200社以上との取引実績があり、コロナ前は年間約32万人の訪日客の受け入れをしておりました。現在は国境開放に当たりツアーの問い合わせ対応と並行し、アジア、欧米豪に顧客を持つ海外旅行会社へのニーズ調査を元にしたモデルコースの造成や、オンラインを通じた商談会、FAMトリップなど、自治体様や事業者様のインバウンド促進の支援を行っております。この度、弊社主体でデジタルアートを活用した小規模の観光イベントを考えておりまして、お見積りの作成をお願いしたくご連絡した次第です。デジタルアートのイベントに関して特定の規模での費用感を伺えれば幸いです。==========================================FREEPLUS Inc.Destination Management DepartmentKen SakataE-mail [email protected] [email protected] http://www.freeplus.co.jp/■ Osaka head office501 Kitahama Business Kaikan, 2-1-17 Kitahama, Chuo-ku, Osaka City, Osaka 541-0041 JapanTEL:(+81) 6 -7739 - 4331FAX:06 - 6537 - 1637■ Beppu Branch OfficeBeppu City Children's Hall West Wing 2F, Suehirocho 1-3, Beppu, Oita, 874-0938TEL: (+81)06-7739-4331―――――――――――――――――【 FP HOTELS South-Namba 】"
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-japanese-wikipedia-ud-head-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-japanese-wikipedia-ud-head-finetuned-squad
This model is a fine-tuned version of [KoichiYasuoka/bert-large-japanese-wikipedia-ud-head](https://huggingface.co/KoichiYasuoka/bert-large-japanese-wikipedia-ud-head) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9130
## Model description
More information needed
## Intended uses & limitations
More information needed
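A minimal question-answering sketch with 🤗 Transformers is shown below; the repository id is a placeholder for wherever this checkpoint is hosted, and the example reuses the widget inputs above:
```python
from transformers import pipeline

# Placeholder repository id; replace with the actual location of this checkpoint.
qa = pipeline("question-answering", model="path/to/bert-large-japanese-wikipedia-ud-head-finetuned-squad")
result = qa(
    question="なんでしょう?",
    context="現在、3ページほどのサイト製作を依頼できる先を探しているのですが、お見積もり等をお願いすることは可能でしょうか?",
)
print(result["answer"], result["score"])
```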
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 1.9136 |
| No log | 2.0 | 100 | 1.9691 |
| No log | 3.0 | 150 | 1.9130 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Craftified/Bob | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
model-index:
- name: CV11_finetuning1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CV11_finetuning1
This model is a fine-tuned version of [Roshana/Wav2Vec1_CV](https://huggingface.co/Roshana/Wav2Vec1_CV) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7162
- Wer: 0.3625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5067 | 0.86 | 400 | 0.6193 | 0.4492 |
| 0.4448 | 1.72 | 800 | 0.6325 | 0.4384 |
| 0.3781 | 2.59 | 1200 | 0.6248 | 0.4197 |
| 0.3172 | 3.45 | 1600 | 0.6408 | 0.4343 |
| 0.2556 | 4.31 | 2000 | 0.6593 | 0.4230 |
| 0.2148 | 5.17 | 2400 | 0.6742 | 0.3987 |
| 0.1779 | 6.03 | 2800 | 0.6658 | 0.3929 |
| 0.1446 | 6.9 | 3200 | 0.6768 | 0.3846 |
| 0.1248 | 7.76 | 3600 | 0.6809 | 0.3804 |
| 0.108 | 8.62 | 4000 | 0.7214 | 0.3683 |
| 0.0938 | 9.48 | 4400 | 0.7162 | 0.3625 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Craig/mGqFiPhu | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| feature-extraction | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean_asr
model-index:
- name: wav2vec2-large-xls-r-300m-korean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-korean
This model is a fine-tuned version of [teddy322/wav2vec2-large-xls-r-300m-korean](https://huggingface.co/teddy322/wav2vec2-large-xls-r-300m-korean) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4474
- Wer: 0.3320
## Model description
More information needed
## Intended uses & limitations
More information needed
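A minimal transcription sketch with 🤗 Transformers, assuming the checkpoint is hosted under the same id as the base model listed above:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "teddy322/wav2vec2-large-xls-r-300m-korean"  # assumed repository id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # placeholder file; the model expects 16 kHz audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```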
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1683 | 1.12 | 400 | 0.4871 | 0.4144 |
| 0.2177 | 2.25 | 800 | 0.5225 | 0.4552 |
| 0.1939 | 3.37 | 1200 | 0.5300 | 0.4456 |
| 0.1432 | 4.49 | 1600 | 0.4704 | 0.3850 |
| 0.1047 | 5.62 | 2000 | 0.4951 | 0.3960 |
| 0.0864 | 6.74 | 2400 | 0.4617 | 0.3638 |
| 0.0686 | 7.87 | 2800 | 0.4477 | 0.3393 |
| 0.0538 | 8.99 | 3200 | 0.4474 | 0.3320 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
CrayonShinchan/fine_tune_try_1 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: bigscience-bloom-rail-1.0
tags:
- stable-diffusion
- diffusion
model-index:
- name: bloom-560m-RLHF-SD2-prompter
results: []
datasets:
- Gustavosta/Stable-Diffusion-Prompts
widget:
- text: "<s>Prompt: "
inference:
parameters:
eos_token_id: 2
max_length: 128
do_sample: true
---
# BLOOM-560m RLHF SD2 Prompter
**COLAB DEMO INCLUDING STABLE DIFFUSION: https://colab.research.google.com/github/aicrumb/doohickey/blob/main/rlhf_prompt_tuner.ipynb**
This model uses RLHF (Reinforcement Learning from Human Feedback) to finetune [mrm8488/bloom-560m-finetuned-sd-prompts](https://hf.co/mrm8488/bloom-560m-finetuned-sd-prompts) further for SD 2.0.
```
batch_size = 16
learning_rate = 0.001 # this is why I didn't have to spend _forever_ on it
```
Generate the extension with "\<s>Prompt: " followed by whatever your normal prompt is.
I did the ranking myself: I sat down and just ranked images for a long time. It's gone through a couple of iterations. Only the biases and layernorm weights were trained. The commit messages are a MESS. **First iteration of this project**
donate so i can do this on real hardware : https://github.com/aicrumb/aicrumb/blob/main/README.md
## Example usage
```python
# Install libraries needed to run the models
!pip install transformers diffusers accelerate -qq
# Import the libraries
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
from transformers import pipeline
import torch
# This is the model that the transformer was finetuned to generate prompts for
model_id = "stabilityai/stable-diffusion-2-base"
# Use the Euler scheduler here
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, revision="fp16", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
# Load the transformer model
prompt_pipe = pipeline("text-generation", model="crumb/bloom-560m-RLHF-SD2-prompter")
prompt = "cool landscape"
# Auto-complete prompt
prompt = "<s>Prompt: " + prompt + ","
extended_prompt = prompt_pipe(prompt, do_sample=True, max_length=42)[0]['generated_text']
extended_prompt = extended_prompt[10:]
print("Prompt is now: ", extended_prompt)
# Generate image
image = pipe(extended_prompt).images[0]
image.save("output.png")
image
```
*Prompt is now: cool landscape, concept art*

*Prompt is now: cool landscape, concept art, sharp focus, digital painting*

short additions, they work though I guess (results vary)
It's also very good at generating prompts by itself, with just the "Prompt:" prompt.
*\<s>Prompt: 1 0 th century, highly detailed, concept art, cinematic lighting, unreal engine, trending on artstation, artstation hd, artstation hq, very very detailed*

Further testing to be done in this area (automated training with aesthetic predicting models, larger data collection about prompt scores, better training in general)
Also, enjoy this graphic I had to make myself because I kept being indecisive about the reward methodology 
CrisLeaf/generador-de-historias-de-tolkien | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: YELP_ALBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.9733333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YELP_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1394
- Accuracy: 0.9733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4967 | 0.03 | 50 | 0.1667 | 0.9467 |
| 0.3268 | 0.06 | 100 | 0.2106 | 0.9133 |
| 0.3413 | 0.1 | 150 | 0.2107 | 0.9667 |
| 0.3172 | 0.13 | 200 | 0.1906 | 0.94 |
| 0.2804 | 0.16 | 250 | 0.2588 | 0.9 |
| 0.2604 | 0.19 | 300 | 0.2023 | 0.94 |
| 0.2532 | 0.22 | 350 | 0.1263 | 0.9533 |
| 0.2103 | 0.26 | 400 | 0.1233 | 0.96 |
| 0.212 | 0.29 | 450 | 0.2019 | 0.9267 |
| 0.2669 | 0.32 | 500 | 0.1110 | 0.9667 |
| 0.2187 | 0.35 | 550 | 0.1542 | 0.96 |
| 0.2203 | 0.38 | 600 | 0.0879 | 0.9733 |
| 0.2699 | 0.42 | 650 | 0.0971 | 0.9667 |
| 0.2107 | 0.45 | 700 | 0.0863 | 0.9667 |
| 0.2443 | 0.48 | 750 | 0.0823 | 0.9733 |
| 0.1987 | 0.51 | 800 | 0.1207 | 0.9733 |
| 0.2326 | 0.54 | 850 | 0.1368 | 0.9667 |
| 0.1787 | 0.58 | 900 | 0.1027 | 0.9667 |
| 0.2159 | 0.61 | 950 | 0.2443 | 0.9333 |
| 0.1316 | 0.64 | 1000 | 0.2035 | 0.9467 |
| 0.2416 | 0.67 | 1050 | 0.0882 | 0.9733 |
| 0.2008 | 0.7 | 1100 | 0.1709 | 0.9533 |
| 0.2065 | 0.74 | 1150 | 0.1098 | 0.9667 |
| 0.2391 | 0.77 | 1200 | 0.1055 | 0.9667 |
| 0.1533 | 0.8 | 1250 | 0.1997 | 0.94 |
| 0.2016 | 0.83 | 1300 | 0.0899 | 0.96 |
| 0.2016 | 0.86 | 1350 | 0.0957 | 0.9733 |
| 0.2316 | 0.9 | 1400 | 0.0784 | 0.98 |
| 0.1839 | 0.93 | 1450 | 0.0784 | 0.9733 |
| 0.2121 | 0.96 | 1500 | 0.1150 | 0.9733 |
| 0.1307 | 0.99 | 1550 | 0.0969 | 0.9733 |
| 0.1271 | 1.02 | 1600 | 0.2326 | 0.9467 |
| 0.1736 | 1.06 | 1650 | 0.0979 | 0.9667 |
| 0.1357 | 1.09 | 1700 | 0.0862 | 0.98 |
| 0.1871 | 1.12 | 1750 | 0.1419 | 0.9667 |
| 0.1411 | 1.15 | 1800 | 0.1301 | 0.96 |
| 0.1317 | 1.18 | 1850 | 0.1602 | 0.9533 |
| 0.1432 | 1.22 | 1900 | 0.1885 | 0.9533 |
| 0.1793 | 1.25 | 1950 | 0.0776 | 0.9667 |
| 0.1322 | 1.28 | 2000 | 0.0822 | 0.9733 |
| 0.1416 | 1.31 | 2050 | 0.0920 | 0.9733 |
| 0.1524 | 1.34 | 2100 | 0.0673 | 0.98 |
| 0.1338 | 1.38 | 2150 | 0.0602 | 0.98 |
| 0.152 | 1.41 | 2200 | 0.0916 | 0.98 |
| 0.1192 | 1.44 | 2250 | 0.0559 | 0.98 |
| 0.1471 | 1.47 | 2300 | 0.1096 | 0.9667 |
| 0.1267 | 1.5 | 2350 | 0.0695 | 0.9733 |
| 0.1776 | 1.54 | 2400 | 0.1363 | 0.96 |
| 0.1495 | 1.57 | 2450 | 0.0818 | 0.98 |
| 0.1158 | 1.6 | 2500 | 0.1282 | 0.9667 |
| 0.1772 | 1.63 | 2550 | 0.0682 | 0.9733 |
| 0.1187 | 1.66 | 2600 | 0.1032 | 0.9733 |
| 0.136 | 1.7 | 2650 | 0.1071 | 0.9667 |
| 0.1829 | 1.73 | 2700 | 0.0753 | 0.9667 |
| 0.1147 | 1.76 | 2750 | 0.1071 | 0.9733 |
| 0.1174 | 1.79 | 2800 | 0.1441 | 0.9667 |
| 0.0707 | 1.82 | 2850 | 0.1362 | 0.9667 |
| 0.1372 | 1.86 | 2900 | 0.1861 | 0.9533 |
| 0.2108 | 1.89 | 2950 | 0.0770 | 0.9733 |
| 0.2014 | 1.92 | 3000 | 0.1114 | 0.9667 |
| 0.1373 | 1.95 | 3050 | 0.1244 | 0.9667 |
| 0.1242 | 1.98 | 3100 | 0.1220 | 0.96 |
| 0.1267 | 2.02 | 3150 | 0.1139 | 0.9733 |
| 0.1021 | 2.05 | 3200 | 0.2013 | 0.9533 |
| 0.1091 | 2.08 | 3250 | 0.1027 | 0.9733 |
| 0.0648 | 2.11 | 3300 | 0.1464 | 0.9733 |
| 0.1207 | 2.14 | 3350 | 0.1255 | 0.9733 |
| 0.0833 | 2.18 | 3400 | 0.0708 | 0.98 |
| 0.0796 | 2.21 | 3450 | 0.1608 | 0.96 |
| 0.0624 | 2.24 | 3500 | 0.0827 | 0.98 |
| 0.0518 | 2.27 | 3550 | 0.0602 | 0.98 |
| 0.1242 | 2.3 | 3600 | 0.0752 | 0.9733 |
| 0.0422 | 2.34 | 3650 | 0.1000 | 0.9733 |
| 0.0748 | 2.37 | 3700 | 0.1171 | 0.9667 |
| 0.0839 | 2.4 | 3750 | 0.1341 | 0.9667 |
| 0.1033 | 2.43 | 3800 | 0.0744 | 0.98 |
| 0.0567 | 2.46 | 3850 | 0.0869 | 0.98 |
| 0.0756 | 2.5 | 3900 | 0.0745 | 0.98 |
| 0.0768 | 2.53 | 3950 | 0.0895 | 0.9733 |
| 0.0878 | 2.56 | 4000 | 0.0703 | 0.98 |
| 0.1023 | 2.59 | 4050 | 0.0806 | 0.98 |
| 0.0807 | 2.62 | 4100 | 0.0338 | 0.9867 |
| 0.0868 | 2.66 | 4150 | 0.0892 | 0.9667 |
| 0.0648 | 2.69 | 4200 | 0.1637 | 0.9533 |
| 0.0535 | 2.72 | 4250 | 0.1622 | 0.9667 |
| 0.0675 | 2.75 | 4300 | 0.1354 | 0.9733 |
| 0.1121 | 2.78 | 4350 | 0.1440 | 0.9533 |
| 0.0714 | 2.82 | 4400 | 0.1022 | 0.9467 |
| 0.0786 | 2.85 | 4450 | 0.1110 | 0.9733 |
| 0.0822 | 2.88 | 4500 | 0.1218 | 0.9733 |
| 0.1075 | 2.91 | 4550 | 0.1041 | 0.9733 |
| 0.0783 | 2.94 | 4600 | 0.0992 | 0.9733 |
| 0.1059 | 2.98 | 4650 | 0.1187 | 0.9733 |
| 0.067 | 3.01 | 4700 | 0.0931 | 0.9733 |
| 0.0425 | 3.04 | 4750 | 0.1252 | 0.9733 |
| 0.0539 | 3.07 | 4800 | 0.1152 | 0.9733 |
| 0.0419 | 3.1 | 4850 | 0.1534 | 0.9667 |
| 0.0462 | 3.13 | 4900 | 0.1398 | 0.9733 |
| 0.0435 | 3.17 | 4950 | 0.1168 | 0.98 |
| 0.0144 | 3.2 | 5000 | 0.1489 | 0.9667 |
| 0.0367 | 3.23 | 5050 | 0.1293 | 0.9733 |
| 0.0336 | 3.26 | 5100 | 0.1353 | 0.9733 |
| 0.0246 | 3.29 | 5150 | 0.0958 | 0.98 |
| 0.0181 | 3.33 | 5200 | 0.1294 | 0.9733 |
| 0.0357 | 3.36 | 5250 | 0.1209 | 0.9733 |
| 0.0683 | 3.39 | 5300 | 0.1748 | 0.96 |
| 0.0353 | 3.42 | 5350 | 0.2159 | 0.9533 |
| 0.0415 | 3.45 | 5400 | 0.1723 | 0.96 |
| 0.0336 | 3.49 | 5450 | 0.1031 | 0.98 |
| 0.0475 | 3.52 | 5500 | 0.0959 | 0.98 |
| 0.0393 | 3.55 | 5550 | 0.2163 | 0.96 |
| 0.0337 | 3.58 | 5600 | 0.1097 | 0.9733 |
| 0.0415 | 3.61 | 5650 | 0.1365 | 0.98 |
| 0.035 | 3.65 | 5700 | 0.1175 | 0.98 |
| 0.0448 | 3.68 | 5750 | 0.1543 | 0.9667 |
| 0.0445 | 3.71 | 5800 | 0.2005 | 0.96 |
| 0.0211 | 3.74 | 5850 | 0.1179 | 0.98 |
| 0.0198 | 3.77 | 5900 | 0.1298 | 0.9733 |
| 0.026 | 3.81 | 5950 | 0.2167 | 0.9667 |
| 0.0412 | 3.84 | 6000 | 0.1224 | 0.98 |
| 0.0446 | 3.87 | 6050 | 0.0798 | 0.98 |
| 0.0174 | 3.9 | 6100 | 0.0577 | 0.9933 |
| 0.0535 | 3.93 | 6150 | 0.1482 | 0.9667 |
| 0.0495 | 3.97 | 6200 | 0.0862 | 0.98 |
| 0.0267 | 4.0 | 6250 | 0.1190 | 0.98 |
| 0.0087 | 4.03 | 6300 | 0.0747 | 0.98 |
| 0.0102 | 4.06 | 6350 | 0.0753 | 0.9867 |
| 0.0178 | 4.09 | 6400 | 0.1812 | 0.9667 |
| 0.0088 | 4.13 | 6450 | 0.0817 | 0.98 |
| 0.0144 | 4.16 | 6500 | 0.0805 | 0.98 |
| 0.014 | 4.19 | 6550 | 0.0862 | 0.9867 |
| 0.0002 | 4.22 | 6600 | 0.0894 | 0.98 |
| 0.0112 | 4.25 | 6650 | 0.1004 | 0.9733 |
| 0.0054 | 4.29 | 6700 | 0.0832 | 0.9867 |
| 0.0001 | 4.32 | 6750 | 0.0812 | 0.9867 |
| 0.0202 | 4.35 | 6800 | 0.1828 | 0.9667 |
| 0.009 | 4.38 | 6850 | 0.1114 | 0.98 |
| 0.0001 | 4.41 | 6900 | 0.1295 | 0.98 |
| 0.0077 | 4.45 | 6950 | 0.1610 | 0.9733 |
| 0.0082 | 4.48 | 7000 | 0.1787 | 0.9667 |
| 0.0198 | 4.51 | 7050 | 0.1485 | 0.9733 |
| 0.0017 | 4.54 | 7100 | 0.1774 | 0.9733 |
| 0.0115 | 4.57 | 7150 | 0.1567 | 0.9733 |
| 0.0001 | 4.61 | 7200 | 0.1534 | 0.9733 |
| 0.0247 | 4.64 | 7250 | 0.2020 | 0.9667 |
| 0.0059 | 4.67 | 7300 | 0.1918 | 0.9667 |
| 0.0052 | 4.7 | 7350 | 0.1315 | 0.98 |
| 0.0076 | 4.73 | 7400 | 0.1289 | 0.98 |
| 0.0218 | 4.77 | 7450 | 0.1610 | 0.9733 |
| 0.0077 | 4.8 | 7500 | 0.1355 | 0.98 |
| 0.0096 | 4.83 | 7550 | 0.1378 | 0.9733 |
| 0.008 | 4.86 | 7600 | 0.1568 | 0.9733 |
| 0.0103 | 4.89 | 7650 | 0.1388 | 0.9733 |
| 0.0009 | 4.93 | 7700 | 0.1221 | 0.98 |
| 0.0287 | 4.96 | 7750 | 0.1448 | 0.9733 |
| 0.01 | 4.99 | 7800 | 0.1394 | 0.9733 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Crives/distilbert-base-uncased-finetuned-emotion | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: malay-patel/bert-finetuned-squad-nq
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# malay-patel/bert-finetuned-squad-nq
This model is a fine-tuned version of [nlpconnect/roberta-base-squad2-nq](https://huggingface.co/nlpconnect/roberta-base-squad2-nq) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5461
- Train End Logits Accuracy: 0.6253
- Train Start Logits Accuracy: 0.6120
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 861, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:-----:|
| 1.5548 | 0.6236 | 0.6172 | 0 |
| 1.5423 | 0.6286 | 0.6192 | 1 |
| 1.5461 | 0.6253 | 0.6120 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2 |
Crumped/imdb-simpleRNN | [
"keras"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### zrn-07-512-sd15-2e-6-800-woman-ddim on Stable Diffusion via Dreambooth
#### model by kingery
This is the Stable Diffusion model fine-tuned on the zrn-07-512-sd15-2e-6-800-woman-ddim concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of yangguangkechuang woman**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
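A minimal `diffusers` sketch for this concept; the repository id below is assumed to match this repo, so adjust it if the weights live elsewhere:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "kingery/zrn-07-512-sd15-2e-6-800-woman-ddim",  # assumed repository id for this concept
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photo of yangguangkechuang woman"  # the instance_prompt used for training
image = pipe(prompt).images[0]
image.save("zrn-07.png")
```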
Here are the images used for training this concept:






|
Cryptikdw/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | Access to model Ivd/glevero is restricted and you are not in the authorized list. Visit https://huggingface.co/Ivd/glevero to ask for access. |
Crystal/distilbert-base-uncased-finetuned-squad | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: enlm-roberta-imdb-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-roberta-imdb-final
This model is a fine-tuned version of [manirai91/enlm-roberta-final](https://huggingface.co/manirai91/enlm-roberta-final) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Cthyllax/DialoGPT-medium-PaladinDanse | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language: es
datasets:
- common_voice
- ciempiess_test
- hub4ne_es_LDC98S74
- callhome_es_LDC96S35
tags:
- audio
- automatic-speech-recognition
- spanish
- xlrs-53-spanish
- ciempiess
- cimpiess-unam
license: cc-by-4.0
widget:
model-index:
- name: wav2vec2-large-xlsr-53-spanish-ep5-944h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 10.0 (Test)
type: mozilla-foundation/common_voice_10_0
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 9.20
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 10.0 (Dev)
type: mozilla-foundation/common_voice_10_0
split: validation
args:
language: es
metrics:
- name: WER
type: wer
value: 8.02
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CIEMPIESS-TEST
type: ciempiess/ciempiess_test
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 11.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: 1997 Spanish Broadcast News Speech (HUB4-NE)
type: HUB4NE_LDC98S74
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 7.48
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CALLHOME Spanish Speech (Test)
type: callhome_LDC96S35
split: test
args:
language: es
metrics:
- name: WER
type: wer
value: 39.12
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CALLHOME Spanish Speech (Dev)
type: callhome_LDC96S35
split: validation
args:
language: es
metrics:
- name: WER
type: wer
value: 40.39
---
# wav2vec2-large-xlsr-53-spanish-ep5-944h
The "wav2vec2-large-xlsr-53-spanish-ep5-944h" is an acoustic model suitable for Automatic Speech Recognition in Spanish. It is the result of fine-tuning the model "facebook/wav2vec2-large-xlsr-53" for 5 epochs with around 944 hours of Spanish data gathered or developed by the [CIEMPIESS-UNAM Project](https://huggingface.co/ciempiess) since 2012. Most of the data is available at the the CIEMPIESS-UNAM Project homepage http://www.ciempiess.org/. The rest can be found in public repositories such as [LDC](https://www.ldc.upenn.edu/) or [OpenSLR](https://openslr.org/)
The specific list of corpora used to fine-tune the model is:
- [CIEMPIESS-LIGHT (18h25m)](https://catalog.ldc.upenn.edu/LDC2017S23)
- [CIEMPIESS-BALANCE (18h20m)](https://catalog.ldc.upenn.edu/LDC2018S11)
- [CIEMPIESS-FEM (13h54m)](https://catalog.ldc.upenn.edu/LDC2019S07)
- [CHM150 (1h38m)](https://catalog.ldc.upenn.edu/LDC2016S04)
- [TEDX_SPANISH (24h29m)](https://openslr.org/67/)
- [LIBRIVOX_SPANISH (73h01m)](https://catalog.ldc.upenn.edu/LDC2020S01)
- [WIKIPEDIA_SPANISH (25h37m)](https://catalog.ldc.upenn.edu/LDC2021S07)
- [VOXFORGE_SPANISH (49h42m)](http://www.voxforge.org/es)
- [MOZILLA COMMON VOICE 10.0 (320h22m)](https://commonvoice.mozilla.org/es)
- [HEROICO (16h33m)](https://catalog.ldc.upenn.edu/LDC2006S37)
- [LATINO-40 (6h48m)](https://catalog.ldc.upenn.edu/LDC95S28)
- [CALLHOME_SPANISH (13h22m)](https://catalog.ldc.upenn.edu/LDC96S35)
- [HUB4NE_SPANISH (31h41m)](https://catalog.ldc.upenn.edu/LDC98S74)
- [FISHER_SPANISH (127h22m)](https://catalog.ldc.upenn.edu/LDC2010S01)
- [Chilean Spanish speech data set (7h08m)](https://openslr.org/71/)
- [Colombian Spanish speech data set (7h34m)](https://openslr.org/72/)
- [Peruvian Spanish speech data set (9h13m)](https://openslr.org/73/)
- [Argentinian Spanish speech data set (8h01m)](https://openslr.org/61/)
- [Puerto Rico Spanish speech data set (1h00m)](https://openslr.org/74/)
- [MediaSpeech Spanish (10h00m)](https://openslr.org/108/)
- [DIMEX100-LIGHT (6h09m)](https://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html)
- [DIMEX100-NIÑOS (08h09m)](https://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html)
- [GOLEM-UNIVERSUM (00h10m)](https://turing.iimas.unam.mx/~luis/DIME/CORPUS-DIMEX.html)
- [GLISSANDO (6h40m)](https://glissando.labfon.uned.es/es)
- TELE_con_CIENCIA (28h16m) **Unpublished Material**
- UNSHAREABLE MATERIAL (118h22m) **Not available for sharing**
The fine-tuning process was performed during November (2022) on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
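For quick transcription of a single audio file, the model can also be used through the ASR pipeline (a minimal sketch):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="carlosdanielhernandezmena/wav2vec2-large-xlsr-53-spanish-ep5-944h")
print(asr("audio.wav")["text"])  # any Spanish recording; 16 kHz mono works best
```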
# Evaluation
```python
import torch
from transformers import Wav2Vec2Processor
from transformers import Wav2Vec2ForCTC
#Load the processor and model.
MODEL_NAME="carlosdanielhernandezmena/wav2vec2-large-xlsr-53-spanish-ep5-944h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)
#Load the dataset
from datasets import load_dataset, load_metric, Audio
ds=load_dataset("ciempiess/ciempiess_test", split="test")
#Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
#Process the dataset
def prepare_dataset(batch):
audio = batch["audio"]
#Batched output is "un-batched" to ensure mapping is correct
batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
with processor.as_target_processor():
batch["labels"] = processor(batch["normalized_text"]).input_ids
return batch
ds = ds.map(prepare_dataset, remove_columns=ds.column_names,num_proc=1)
#Define the evaluation metric
import numpy as np
wer_metric = load_metric("wer")
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
#We do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
#Do the evaluation (with batch_size=1)
model = model.to(torch.device("cuda"))
def map_to_result(batch):
with torch.no_grad():
input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0)
logits = model(input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_str"] = processor.batch_decode(pred_ids)[0]
batch["sentence"] = processor.decode(batch["labels"], group_tokens=False)
return batch
results = ds.map(map_to_result,remove_columns=ds.column_names)
#Compute the overall WER now.
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["sentence"])))
```
**Test Result**: 0.112
# BibTeX entry and citation info
*When publishing results based on these models please refer to:*
```bibtex
@misc{mena2022xlrs53spanish,
title={Acoustic Model in Spanish: wav2vec2-large-xlsr-53-spanish-ep5-944h.},
author={Hernandez Mena, Carlos Daniel},
year={2022},
url={https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-spanish-ep5-944h},
}
```
# Acknowledgements
The author wants to thank the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) at the [Facultad de Ingeniería (FI)](https://www.ingenieria.unam.mx/) of the [Universidad Nacional Autónoma de México (UNAM)](https://www.unam.mx/). He also thanks the social service students for all their hard work.
Special thanks to Jón Guðnason, head of the Language and Voice Lab, for providing the computational power to make this model possible. The author also thanks the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
|
Culmenus/XLMR-ENIS-finetuned-ner | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: hygpt2-clm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hygpt2-clm
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4000
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Culmenus/opus-mt-de-is-finetuned-de-to-is | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-12-01T08:37:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9273693534100974
- name: Recall
type: recall
value: 0.9370175634858485
- name: F1
type: f1
value: 0.932168493684269
- name: Accuracy
type: accuracy
value: 0.9839070964462167
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9274
- Recall: 0.9370
- F1: 0.9322
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
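For reference only, the hyperparameters above map onto a standard `TrainingArguments` object roughly as sketched below; dataset preprocessing, the data collator and the metric computation are omitted and would be needed for a full run.
```python
from transformers import TrainingArguments
# A minimal sketch: only the values listed above are taken from this card.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # implied by the per-epoch validation results below
)
```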
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2431 | 1.0 | 878 | 0.0690 | 0.9174 | 0.9214 | 0.9194 | 0.9811 |
| 0.0525 | 2.0 | 1756 | 0.0606 | 0.9251 | 0.9348 | 0.9299 | 0.9830 |
| 0.0299 | 3.0 | 2634 | 0.0602 | 0.9274 | 0.9370 | 0.9322 | 0.9839 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_1 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4864864864864865
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3905
- Accuracy: 0.4865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6128 | 0.97 | 15 | 1.3905 | 0.4865 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Access to model sd-dreambooth-library/ssssssslacis is restricted and you are not in the authorized list. Visit https://huggingface.co/sd-dreambooth-library/ssssssslacis to ask for access. |
CyberMuffin/DialoGPT-small-ChandlerBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: cc-by-4.0
---
## Aina Project's Catalan-Spanish machine translation model for the administrative domain.
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-use)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Data Preparation](#data-preparation)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing Information](#licensing-information)
- [Funding](#funding)
- [Disclaimer](#disclaimer)
## Model description
This model is a finetuned version of [projecte-aina/mt-aina-ca-es](https://huggingface.co/projecte-aina/mt-aina-ca-es) for the administrative domain. Additionally, the model is evaluated on several public datasets comprising 5 different domains (general, administrative, technology, biomedical, and news).
## Intended uses and limitations
You can use this model for machine translation of administrative texts from Catalan to Spanish.
## How to use
### Usage
Required libraries:
```bash
pip install ctranslate2 pyonmttok
```
Translate a sentence using python
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download
model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-ca-es-adm", revision="main")
tokenizer=pyonmttok.Tokenizer(mode="none", sp_model_path = model_dir + "/spm.model")
tokenized=tokenizer.tokenize("Benvingut al projecte Aina!")
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
## Training
### Training data
The original model was trained on a combination of the following datasets:
| Dataset | Sentences |
|-------------------|----------------|
| DOCG v2 | 8.472.786 |
| El Periodico | 6.483.106 |
| EuroParl | 1.876.669 |
| WikiMatrix | 1.421.077 |
| Wikimedia | 335.955 |
| QED | 71.867 |
| TED2020 v1 | 52.177 |
| CCMatrix v1 | 56.103.820 |
| MultiCCAligned v1 | 2.433.418 |
| ParaCrawl | 15.327.808 |
| **Total** | **92.578.683** |
This finetuned model is further trained using:
| Dataset | Sentences |
|-------------------|----------------|
| AINA AAPP | 62.773 |
### Training procedure
### Data preparation
All datasets are concatenated and filtered using the [mBERT Gencata parallel filter](https://huggingface.co/projecte-aina/mbert-base-gencata) and cleaned using the clean-corpus-n.pl script from [moses](https://github.com/moses-smt/mosesdecoder), allowing sentences between 5 and 150 words.
Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py)
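As a toy illustration only (the real pipeline uses the mBERT Gencata filter and the moses `clean-corpus-n.pl` script mentioned above), the 5-150 word length criterion could be expressed in Python as follows:
```python
# Toy sentence pairs; in practice these come from the corpora listed above.
pairs = [
    ("Benvingut al projecte Aina, una iniciativa per a la llengua catalana.",
     "Bienvenido al proyecto Aina, una iniciativa para la lengua catalana."),
    ("Massa curt.", "Demasiado corto."),
]

def keep_pair(src: str, tgt: str, min_words: int = 5, max_words: int = 150) -> bool:
    """Keep a sentence pair only if both sides contain between 5 and 150 words."""
    return all(min_words <= len(side.split()) <= max_words for side in (src, tgt))

filtered = [pair for pair in pairs if keep_pair(*pair)]
print(len(filtered))  # 1: the second pair is dropped for being too short
```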
#### Tokenization
All data is tokenized using SentencePiece, with a 50-thousand-token SentencePiece model learned from the combination of all the filtered training data. This SentencePiece model is included.
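A minimal sketch of how a 50k-token SentencePiece model of this kind can be learned (the input file name is a placeholder, and the exact training options used for this model are not documented here):
```python
import sentencepiece as spm

# "filtered_corpus.txt" is a placeholder for the concatenated, filtered training text.
spm.SentencePieceTrainer.train(
    input="filtered_corpus.txt",
    model_prefix="spm",
    vocab_size=50000,
)
# Produces spm.model / spm.vocab, which is what the usage example above loads.
```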
#### Hyperparameters
The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf)
The following hyperparameters were set in the Fairseq toolkit:
| Hyperparameter | Value |
|------------------------------------|----------------------------------|
| Architecture | transformer_vaswani_wmt_en_de_big|
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 96.000 |
| Optimizer | adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 1e-3 |
| Lr. scheduler                      | inverse sqrt                     |
| Warmup updates | 4000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |
The original model was trained using shards of 10 million sentences, for a total of 13.000 updates. Weights were saved every 1000 updates and reported results are the average of the last 6 checkpoints.
For the finetuning, the model continued training for 9000 updates with a reduced maximum learning rate of 5e-5.
## Evaluation
### Variable and metrics
We use the BLEU score for evaluation on test sets: [Flores-101](https://github.com/facebookresearch/flores), [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/), [United Nations](https://zenodo.org/record/3888414#.Y33-_tLMIW0), [Cybersecurity](https://elrc-share.eu/repository/browse/cyber-mt-test-set/2bd93faab98c11ec9c1a00155d026706b96a490ed3e140f0a29a80a08c46e91e/), [wmt19 biomedical test set](), [wmt13 news test set](https://elrc-share.eu/repository/browse/catalan-wmt2013-machine-translation-shared-task-test-set/84a96139b98611ec9c1a00155d0267061a0aa1b62e2248e89aab4952f3c230fc/), [aina aapp]()
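For reference, BLEU on any of these test sets can be computed with, for example, sacreBLEU; this is a sketch with toy data, since the exact evaluation setup is not detailed in this card:
```python
import sacrebleu

# Toy data; in practice these are the detokenized system outputs and the
# reference translations of one of the test sets listed above.
hypotheses = ["El gato está sobre la mesa."]
references = ["El gato está encima de la mesa."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")
```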
### Evaluation results
Below are the evaluation results on the machine translation from Catalan to Spanish compared to [Softcatalà](https://www.softcatala.org/) and [Google Translate](https://translate.google.es/?hl=es):
| Test set | SoftCatalà | Google Translate | mt-aina-ca-es | mt-aina-ca-es-adm |
|----------------------|------------|------------------|---------------|-------------------|
| Spanish Constitution | 70,7 | **77,1** | 75,5 | 67,2 |
| United Nations | 78,1 | 84,3 | **86,3** | 78,0 |
| Flores 101 dev | 23,5 | 24 | **24,1** | 22,4 |
| Flores 101 devtest | 24,1 | 24,2 | **24,4** | 22,9 |
| Cybersecurity | 67,3 | **76,9** | 75,1 | 65,7 |
| wmt 19 biomedical | 60,4 | 62,7 | **63,0** | 59,2 |
| wmt 13 news | 22,5 | 23,1 | **23,4** | 20,4 |
| aina_aapp | 80,9 | 81,4 | 82,8 | **85,5** |
| Average | 53,4 | 56,7 | **56,7** | 52,6 |
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to [email protected]
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
DHBaek/gpt2-stackoverflow-question-contents-generator | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
datasets:
- tweet_eval
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-dec2021-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: emotion
split: test
metrics:
- name: Micro F1 (tweet_eval/emotion)
type: micro_f1_tweet_eval/emotion
value: 0.8451794510907812
- name: Macro F1 (tweet_eval/emotion)
      type: macro_f1_tweet_eval/emotion
value: 0.8173778863357652
- name: Accuracy (tweet_eval/emotion)
type: accuracy_tweet_eval/emotion
value: 0.8451794510907812
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/twitter-roberta-base-dec2021-emotion
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the
[`tweet_eval (emotion)`](https://huggingface.co/datasets/tweet_eval)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
Training split is `train` and parameters have been tuned on the validation split `validation`.
Following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-emotion/raw/main/metric.json)).
- F1 (micro): 0.8451794510907812
- F1 (macro): 0.8173778863357652
- Accuracy: 0.8451794510907812
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-dec2021-emotion", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
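Since the checkpoint is a standard RoBERTa sequence-classification model, it can presumably also be used through the plain `transformers` pipeline API without installing tweetnlp; the label names returned depend on the model's config:
```python
from transformers import pipeline

pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-dec2021-emotion")
# Returns the top label as [{'label': ..., 'score': ...}].
print(pipe("I love swimming for the same reason I love meditating...the feeling of weightlessness."))
```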
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
    title = {{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author = {Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others},
    booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
    month = nov,
    year = {2022},
    address = {Abu Dhabi, U.A.E.},
    publisher = {Association for Computational Linguistics},
}
```
|
DSI/ar_emotion_6 | [
"pytorch",
"bert",
"transformers"
]
| null | {
"architectures": [
"BertForMultiLabelSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-finetuned-cola-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-v5
This model is a fine-tuned version of [MGanesh29/distilbert-base-uncased-finetuned-cola-v5](https://huggingface.co/MGanesh29/distilbert-base-uncased-finetuned-cola-v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2563
- Accuracy: 0.9310
- Precision: 0.9310
- Recall: 0.9310
- F1: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 6.25 | 50 | 0.2638 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
| No log | 12.5 | 100 | 0.2607 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
| No log | 18.75 | 150 | 0.2643 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
| No log | 25.0 | 200 | 0.2563 | 0.9310 | 0.9310 | 0.9310 | 0.9310 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
DSI/personal_sentiment | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 970 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 970,
"warmup_steps": 97,
"weight_decay": 0.01
}
```
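Put together, the run described above likely corresponded to something like the sketch below; the list of `InputExample`s and the starting checkpoint are assumptions, and only the hyperparameters come from the values above.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumption: a list of scored sentence pairs (3880 of them would give the
# 970 batches of size 4 reported above).
train_examples = [InputExample(texts=["sentence A", "sentence B"], label=0.8)]

model = SentenceTransformer("{MODEL_NAME}")  # placeholder, as elsewhere in this card
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=4)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=97,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```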
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LongformerModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"transformers",
"Tweets",
"Sentiment analysis"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: enlm-roberta-conll2003-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-roberta-conll2003-final
This model is a fine-tuned version of [manirai91/enlm-roberta-final](https://huggingface.co/manirai91/enlm-roberta-final) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
DTAI-KULeuven/robbertje-1-gb-merged | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-news-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-news-classifier
This model is a fine-tuned version of [russellc/roberta-news-classifier](https://huggingface.co/russellc/roberta-news-classifier) on a custom (Kaggle) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1043
- Accuracy: 0.9786
- F1: 0.9786
- Precision: 0.9786
- Recall: 0.9786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1327 | 1.0 | 123 | 0.1043 | 0.9786 | 0.9786 | 0.9786 | 0.9786 |
| 0.1103 | 2.0 | 246 | 0.1157 | 0.9735 | 0.9735 | 0.9735 | 0.9735 |
| 0.102 | 3.0 | 369 | 0.1104 | 0.9735 | 0.9735 | 0.9735 | 0.9735 |
| 0.0825 | 4.0 | 492 | 0.1271 | 0.9714 | 0.9714 | 0.9714 | 0.9714 |
| 0.055 | 5.0 | 615 | 0.1296 | 0.9724 | 0.9724 | 0.9724 | 0.9724 |
### Evaluation results
Prediction was run on 980 examples with batch size 64. Per-class results:
| Class | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| dunya | 0.99 | 0.96 | 0.97 | 147 |
| ekonomi | 0.96 | 0.96 | 0.96 | 141 |
| kultur | 0.97 | 0.99 | 0.98 | 142 |
| saglik | 0.99 | 0.98 | 0.98 | 148 |
| siyaset | 0.98 | 0.98 | 0.98 | 134 |
| spor | 1.00 | 1.00 | 1.00 | 139 |
| teknoloji | 0.96 | 0.98 | 0.97 | 129 |
| accuracy | | | 0.98 | 980 |
| macro avg | 0.98 | 0.98 | 0.98 | 980 |
| weighted avg | 0.98 | 0.98 | 0.98 | 980 |
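For inference, the checkpoint can presumably be loaded with the standard text-classification pipeline; the repository id is assumed to be the one linked above and the example headline is purely illustrative:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="russellc/roberta-news-classifier")
# Illustrative Turkish headline ("The dollar and the euro started the week higher").
print(classifier("Dolar ve euro haftaya yükselişle başladı."))
```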
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
alexandrainst/da-hatespeech-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 866 | null | ---
datasets:
- tweet_eval
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-dec2021-sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: sentiment
split: test
metrics:
- name: Micro F1 (tweet_eval/sentiment)
type: micro_f1_tweet_eval/sentiment
value: 0.7128785411917942
- name: Macro F1 (tweet_eval/sentiment)
      type: macro_f1_tweet_eval/sentiment
value: 0.7149679965048391
- name: Accuracy (tweet_eval/sentiment)
type: accuracy_tweet_eval/sentiment
value: 0.7128785411917942
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/twitter-roberta-base-dec2021-sentiment
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the
[`tweet_eval (sentiment)`](https://huggingface.co/datasets/tweet_eval)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
Training split is `train` and parameters have been tuned on the validation split `validation`.
Following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-sentiment/raw/main/metric.json)).
- F1 (micro): 0.7128785411917942
- F1 (macro): 0.7149679965048391
- Accuracy: 0.7128785411917942
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-dec2021-sentiment", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
    title = {{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author = {Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others},
    booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
    month = nov,
    year = {2022},
    address = {Abu Dhabi, U.A.E.},
    publisher = {Association for Computational Linguistics},
}
```
|
alexandrainst/da-hatespeech-detection-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,719 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilBERT-POWO_MGH_Lifecycle_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-POWO_MGH_Lifecycle_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0716 | 1.0 | 1625 | 0.0843 |
| 0.0695 | 2.0 | 3250 | 0.0701 |
| 0.0603 | 3.0 | 4875 | 0.0728 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
alexandrainst/da-sentiment-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"arxiv:1910.09700",
"transformers",
"license:cc-by-sa-4.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,432 | 2022-12-01T11:44:39Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: SimQA-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SimQA-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1454
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 597, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
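The optimizer configuration above corresponds to what `transformers.create_optimizer` produces; a hedged sketch of the equivalent setup, taking only the values listed above from this card:
```python
import tensorflow as tf
from transformers import create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision above

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,
    num_train_steps=597,   # decay_steps in the config above
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```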
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.7101 | 0 |
| 0.1836 | 1 |
| 0.1454 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
alexandrainst/da-subjectivivity-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"dataset:DDSC/twitter-sent",
"dataset:DDSC/europarl",
"transformers",
"license:cc-by-sa-4.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 846 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean_asr
model-index:
- name: wav2vec2-large-xls-r-300m-zeroth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-zeroth
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7052
- Wer: 0.4621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 15.1763 | 1.61 | 400 | 4.6768 | 1.0 |
| 3.1779 | 3.21 | 800 | 1.6680 | 0.8752 |
| 1.052 | 4.82 | 1200 | 0.9580 | 0.7332 |
| 0.5412 | 6.42 | 1600 | 0.7752 | 0.5993 |
| 0.3281 | 8.03 | 2000 | 0.7158 | 0.5615 |
| 0.2312 | 9.64 | 2400 | 0.6975 | 0.5532 |
| 0.2001 | 11.24 | 2800 | 0.7489 | 0.5677 |
| 0.1587 | 12.85 | 3200 | 0.6954 | 0.5267 |
| 0.1321 | 14.46 | 3600 | 0.7329 | 0.5371 |
| 0.1178 | 16.06 | 4000 | 0.7534 | 0.5341 |
| 0.103 | 17.67 | 4400 | 0.7046 | 0.5066 |
| 0.0843 | 19.28 | 4800 | 0.7507 | 0.5028 |
| 0.079 | 20.88 | 5200 | 0.7137 | 0.4886 |
| 0.0647 | 22.49 | 5600 | 0.7170 | 0.4855 |
| 0.0565 | 24.1 | 6000 | 0.7124 | 0.4781 |
| 0.0487 | 25.7 | 6400 | 0.7043 | 0.4721 |
| 0.0433 | 27.31 | 6800 | 0.7128 | 0.4557 |
| 0.0379 | 28.91 | 7200 | 0.7052 | 0.4621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
DanL/scientific-challenges-and-directions | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:DanL/scientific-challenges-and-directions-dataset",
"arxiv:2108.13751",
"transformers",
"generated_from_trainer"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 134 | null | ---
language:
- it
tags:
- Biomedical Language Modeling
widget:
- text: "L'asma allergica è una patologia dell'[MASK] respiratorio causata dalla presenza di allergeni responsabili dell'infiammazione dell'albero bronchiale."
example_title: "Example 1"
- text: "Il pancreas produce diversi [MASK] molto importanti tra i quali l'insulina e il glucagone."
example_title: "Example 2"
- text: "Il GABA è un amminoacido ed è il principale neurotrasmettitore inibitorio del [MASK]."
example_title: "Example 3"
--- |
Danih1502/t5-small-finetuned-en-to-de | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
datasets:
- tweet_eval
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-dec2021-hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: hate
split: test
metrics:
- name: Micro F1 (tweet_eval/hate)
type: micro_f1_tweet_eval/hate
value: 0.5666666666666667
- name: Macro F1 (tweet_eval/hate)
      type: macro_f1_tweet_eval/hate
value: 0.5411020518761093
- name: Accuracy (tweet_eval/hate)
type: accuracy_tweet_eval/hate
value: 0.5666666666666667
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/twitter-roberta-base-dec2021-hate
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the
[`tweet_eval (hate)`](https://huggingface.co/datasets/tweet_eval)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
Training split is `train` and parameters have been tuned on the validation split `validation`.
Following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-hate/raw/main/metric.json)).
- F1 (micro): 0.5666666666666667
- F1 (macro): 0.5411020518761093
- Accuracy: 0.5666666666666667
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-dec2021-hate", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
    title = {{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author = {Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others},
    booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
    month = nov,
    year = {2022},
    address = {Abu Dhabi, U.A.E.},
    publisher = {Association for Computational Linguistics},
}
```
|
DannyMichael/ECU911 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: it
license: afl-3.0
widget:
- text: Il <mask> ha chiesto revocarsi l'obbligo di pagamento
---
<img src="https://huggingface.co/dlicari/Italian-Legal-BERT-SC/resolve/main/ITALIAN_LEGAL_BERT-SC.jpg" width="600"/>
# ITALIAN-LEGAL-BERT-SC
It is the [ITALIAN-LEGAL-BERT](https://huggingface.co/dlicari/Italian-Legal-BERT) variant pre-trained from scratch on Italian legal documents (ITA-LEGAL-BERT-SC), based on the CamemBERT architecture.
## Training procedure
It was trained from scratch using a larger training dataset: 6.6 GB of civil and criminal cases.
We used the [CamemBERT](https://huggingface.co/docs/transformers/main/en/model_doc/camembert) architecture with a language modeling head on top, the AdamW optimizer, an initial learning rate of 2e-5 (with linear learning rate decay), a sequence length of 512, a batch size of 18, and 1 million training steps, on 8 NVIDIA A100 40GB GPUs using distributed data parallel (each step processes 8 batches). It uses SentencePiece tokenization trained from scratch on a subset of the training set (5 million sentences), with a vocabulary size of 32000.
## Usage
The ITALIAN-LEGAL-BERT-SC model can be loaded like this:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dlicari/Italian-Legal-BERT-SC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
You can use the Transformers library fill-mask pipeline to do inference with ITALIAN-LEGAL-BERT.
```python
# %pip install sentencepiece
# %pip install transformers
from transformers import pipeline
model_name = "dlicari/Italian-Legal-BERT-SC"
fill_mask = pipeline("fill-mask", model_name)
fill_mask("Il <mask> ha chiesto revocarsi l'obbligo di pagamento")
# [{'score': 0.6529251933097839,'token_str': 'ricorrente',
# {'score': 0.0380014143884182, 'token_str': 'convenuto',
# {'score': 0.0360226035118103, 'token_str': 'richiedente',
# {'score': 0.023908283561468124,'token_str': 'Condominio',
# {'score': 0.020863816142082214, 'token_str': 'lavoratore'}]
``` |
DavidSpaceG/MSGIFSR | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- big_patent
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-Big-Patent-h
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: big_patent
type: big_patent
config: h
split: train
args: h
metrics:
- name: Rouge1
type: rouge
value: 33.9091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-Big-Patent-h
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2622
- Rouge1: 33.9091
- Rouge2: 14.1731
- Rougel: 30.105
- Rougelsum: 30.3666
## Model description
In this project, we fine-tuned mT5-small, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages.
The model was fine-tuned on the electric patent corpus using a variety of techniques, including transfer learning, data augmentation, and hyperparameter tuning.
## Intended uses & limitations
The fine-tuned model showed significant improvements in performance on electric-patent-specific tasks compared to the original pre-trained model.
Note: this model is suitable for researchers working on electric patents, as it is fine-tuned on electric patents and can be used for related NLP problems in electric-patent research.
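A hedged usage sketch follows; the full Hub repository id is not stated in this card, so the namespace below is a placeholder.
```python
from transformers import pipeline

# "your-namespace" is a placeholder -- replace it with the actual Hub namespace of this checkpoint.
summarizer = pipeline("summarization", model="your-namespace/mt5-small-finetuned-Big-Patent-h")

patent_text = "An electric machine comprising a stator, a rotor and a cooling circuit ..."  # illustrative input
print(summarizer(patent_text, max_length=64, min_length=10)[0]["summary_text"])
```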
## Training and evaluation data
A subset of electric patents was used to fine-tune the model.
The fine-tuned model was evaluated using the ROUGE metric on a variety of natural language processing tasks specific to the patent domain, including named entity recognition and summarization.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.5817 | 1.0 | 1071 | 2.3830 | 32.8521 | 13.2087 | 29.5594 | 29.7744 |
| 2.5657 | 2.0 | 2142 | 2.3345 | 33.9434 | 14.0573 | 30.0135 | 30.2533 |
| 2.4915 | 3.0 | 3213 | 2.2761 | 33.2033 | 13.2053 | 29.5126 | 29.8023 |
| 2.4365 | 4.0 | 4284 | 2.3041 | 33.8649 | 13.6629 | 30.0377 | 30.257 |
| 2.3952 | 5.0 | 5355 | 2.2722 | 33.9208 | 13.8018 | 30.1035 | 30.3432 |
| 2.3628 | 6.0 | 6426 | 2.2850 | 33.883 | 13.9537 | 30.0579 | 30.2417 |
| 2.3474 | 7.0 | 7497 | 2.2858 | 33.7201 | 14.0808 | 30.0762 | 30.255 |
| 2.331 | 8.0 | 8568 | 2.2622 | 33.9091 | 14.1731 | 30.105 | 30.3666 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Davlan/bert-base-multilingual-cased-finetuned-swahili | [
"pytorch",
"tf",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 67 | null | ---
datasets:
- tweet_eval
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-2021-124m-sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: sentiment
split: test
metrics:
- name: Micro F1 (tweet_eval/sentiment)
type: micro_f1_tweet_eval/sentiment
value: 0.7133669814392706
- name: Macro F1 (tweet_eval/sentiment)
      type: macro_f1_tweet_eval/sentiment
value: 0.7158353597305398
- name: Accuracy (tweet_eval/sentiment)
type: accuracy_tweet_eval/sentiment
value: 0.7133669814392706
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/twitter-roberta-base-2021-124m-sentiment
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the
[`tweet_eval (sentiment)`](https://huggingface.co/datasets/tweet_eval)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train`, and parameters were tuned on the validation split `validation`.
The following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m-sentiment/raw/main/metric.json)).
- F1 (micro): 0.7133669814392706
- F1 (macro): 0.7158353597305398
- Accuracy: 0.7133669814392706
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-2021-124m-sentiment", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
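Alternatively, here is a rough sketch with the plain 🤗 Transformers pipeline; label names come from the model config at load time, so they are not hard-coded here:
```python
# Sketch: the same checkpoint through the standard text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="cardiffnlp/twitter-roberta-base-2021-124m-sentiment")
print(classifier("Yes, including Medicare and social security saving👍"))
```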
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author={Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others},
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Davlan/bert-base-multilingual-cased-ner-hrl | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible",
"has_space"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 269,898 | null | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
inference: true
license: creativeml-openrail-m
datasets:
- Guizmus/AnimeChanStyle
- skytnt/fbanimehq
- skytnt/anime-segmentation
- Nerfgun3/bad_prompt
- Nerfgun3/shatter_style
- Nerfgun3/ouroboros_embeddings
- cattoroboto/waifudiffusion-marine-textual-inversion
- waifu-research-department/regularization
- waifu-research-department/embeddings
library_name: diffusers
pipeline_tag: text-to-image
--- |
Davlan/distilbert-base-multilingual-cased-masakhaner | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
license: bigscience-bloom-rail-1.0
tags:
- stable-diffusion
- diffusion
model-index:
- name: bloom-560m-RLHF-SD2-prompter
results: []
datasets:
- Gustavosta/Stable-Diffusion-Prompts
widget:
- text: "<s>Prompt: "
inference:
parameters:
eos_token_id: 2
max_length: 128
do_sample: true
---
# The RAT (RLHF-Aesthetic Tuned model for prompt synthesis)
**COLAB DEMO INCLUDING STABLE DIFFUSION: https://colab.research.google.com/github/aicrumb/doohickey/blob/main/rlhf_prompt_tuner.ipynb**
This is a further fine-tuned version of [crumb/bloom-560m-RLHF-SD2-prompter](https://hf.co/crumb/bloom-560m-RLHF-SD2-prompter), optimized for aesthetic score with the models from https://github.com/crowsonkb/simulacra-aesthetic-models instead of me hand-scoring each image.
Donate so I can do this on real hardware: https://github.com/aicrumb/aicrumb/blob/main/README.md
Trained at batch size 32 and learning rate 0.0001, tuning only the biases and LayerNorm weights.
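For reference, here is a minimal PyTorch sketch of that parameter selection (an illustration of bias + LayerNorm tuning, not the author's actual training code):
```python
# Sketch: freeze everything except bias terms and LayerNorm parameters before fine-tuning.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("crumb/bloom-560m-RLHF-SD2-prompter")
for name, param in model.named_parameters():
    # BLOOM layer norms appear as "*layernorm*" and "ln_f"; biases end with ".bias"
    param.requires_grad = name.endswith(".bias") or "layernorm" in name or "ln_f" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} parameters")
```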
## Example usage
```python
# Install libraries needed to run the models
!pip install transformers diffusers accelerate -qq
# Import the libraries
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
from transformers import pipeline
import torch
# This is the model that the transformer was finetuned to generate prompts for
model_id = "stabilityai/stable-diffusion-2-base"
# Use the Euler scheduler here
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, revision="fp16", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
# Load the transformer model
prompt_pipe = pipeline("text-generation", model="crumb/bloom-560m-RLHF-SD2-prompter-aesthetic")
prompt = "cool landscape"
# Auto-complete prompt
prompt = "<s>Prompt: " + prompt + ","
extended_prompt = prompt_pipe(prompt, do_sample=True, max_length=42)[0]['generated_text']
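# Strip the prompt-template prefix (the first 10 characters) so only the expanded prompt remains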
extended_prompt = extended_prompt[10:]
print("Prompt is now: ", extended_prompt)
# Generate image
image = pipe(extended_prompt).images[0]
image.save("output.png")
image
```
## Limitations
Aesthetic scoring models have been shown to have very large biases, and one I noticed is that they really like images of women no matter the actual quality, so those were optimized for more than other things.
It also fell into the usual trap of RLHF models and gets kinda same-y, so if you don't like the general "stable diffusion, trending on artstation" look, this might not be for you.
Davlan/m2m100_418M-yor-eng-mt | [
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"M2M100ForConditionalGeneration"
],
"model_type": "m2m_100",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | Access to model Anko2/IA_Trend is restricted and you are not in the authorized list. Visit https://huggingface.co/Anko2/IA_Trend to ask for access. |
Davlan/mT5_base_yoruba_adr | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2003.10564",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: other
tags:
- stable-diffusion
- text-to-image
- core-ml
---
# Stable Diffusion v2 Model Card
This model was generated by Hugging Face using [Apple’s repository](https://github.com/apple/ml-stable-diffusion), which is distributed under the [ASCL](https://github.com/apple/ml-stable-diffusion/blob/main/LICENSE.md).
This model card focuses on the model associated with the Stable Diffusion v2 model, available [here](https://github.com/Stability-AI/stablediffusion).
The model is trained from scratch for 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. It is then further trained for 850k steps at resolution `512x512` on the same dataset, restricted to images with resolution `>= 512x512`.

These weights here have been converted to Core ML for use on Apple Silicon hardware.
There are 4 variants of the Core ML weights:
```
coreml-stable-diffusion-2-base
├── original
│ ├── compiled # Swift inference, "original" attention
│ └── packages # Python inference, "original" attention
└── split_einsum
├── compiled # Swift inference, "split_einsum" attention
└── packages # Python inference, "split_einsum" attention
```
Please refer to https://huggingface.co/blog/diffusers-coreml for details.
- Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-2-base#examples)
- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `512-base-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/main/512-base-ema.ckpt).
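As a quick sanity check, here is a minimal Python sketch using 🧨 diffusers with the original PyTorch weights of `stabilityai/stable-diffusion-2-base` (an assumption for illustration; it does not load the Core ML files in this repository, for which Apple's `ml-stable-diffusion` instructions apply):
```python
# Sketch: PyTorch inference via diffusers, not Core ML inference.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```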
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure**
Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512.
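For readers unfamiliar with the v-objective, here is a compact statement following Salimans & Ho (2022); the notation below is assumed rather than taken from this card, with `x_0` the clean latent, `c` the text conditioning, and `(alpha_t, sigma_t)` the noise schedule:
```latex
% Sketch of the v-prediction target and loss (notation assumed; see arXiv:2202.00512)
z_t = \alpha_t x_0 + \sigma_t \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)
v_t = \alpha_t \epsilon - \sigma_t x_0
\mathcal{L}(\theta) = \mathbb{E}_{x_0,\, \epsilon,\, t}\left[ \lVert v_\theta(z_t, t, c) - v_t \rVert_2^2 \right]
```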
We currently provide the following checkpoints:
- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`.
850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752).
In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 DDIM sampling steps show the relative improvements of the checkpoints:

Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
## Citation
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* |
Davlan/mbart50-large-eng-yor-mt | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
datasets:
- tweet_eval
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-2021-124m-hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: hate
split: test
metrics:
- name: Micro F1 (tweet_eval/hate)
type: micro_f1_tweet_eval/hate
value: 0.5606060606060606
- name: Macro F1 (tweet_eval/hate)
      type: macro_f1_tweet_eval/hate
value: 0.5319403309512811
- name: Accuracy (tweet_eval/hate)
type: accuracy_tweet_eval/hate
value: 0.5606060606060606
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
example_title: "sentiment 1"
- text: All two of them taste like ass.
example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
example_title: "emoji 1"
---
# cardiffnlp/twitter-roberta-base-2021-124m-hate
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the
[`tweet_eval (hate)`](https://huggingface.co/datasets/tweet_eval)
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train`, and parameters were tuned on the validation split `validation`.
The following metrics are achieved on the test split `test` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m-hate/raw/main/metric.json)).
- F1 (micro): 0.5606060606060606
- F1 (macro): 0.5319403309512811
- Accuracy: 0.5606060606060606
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-2021-124m-hate", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```
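For class probabilities rather than a single label, here is a rough sketch with plain 🤗 Transformers (assuming the usual `id2label` mapping is stored in the model config):
```python
# Sketch: per-class probabilities with AutoModelForSequenceClassification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "cardiffnlp/twitter-roberta-base-2021-124m-hate"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Whoever just unfollowed me you a bitch", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
for i, p in enumerate(probs.tolist()):
    print(model.config.id2label[i], round(p, 4))
```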
### Reference
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia},
    author={Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others},
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|