modelId stringlengths 4-81 | tags list | pipeline_tag stringclasses 17 values | config dict | downloads int64 0-59.7M | first_commit timestamp[ns, tz=UTC] | card stringlengths 51-438k |
---|---|---|---|---|---|---|
Aries/T5_question_answering | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 5 | 2022-10-17T10:54:04Z | ---
license: mit
---
### flatic on Stable Diffusion
This is the `<flat-ct>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
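If you prefer plain `diffusers` code over the notebooks, a minimal sketch for loading a Textual Inversion concept looks like the following; the base checkpoint and the concept repository id are placeholders, not taken from this card.
```python
# Sketch: load a Textual Inversion embedding into a Stable Diffusion pipeline.
# "runwayml/stable-diffusion-v1-5" and "sd-concepts-library/flatic" are assumed placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/flatic")  # registers the <flat-ct> token
image = pipe("a city skyline in the style of <flat-ct>").images[0]
image.save("flat-ct-style.png")
```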
Here is the new concept you will be able to use as a `style`:





|
asaakyan/mbart-poetic-all | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### cat_toy_test on Stable Diffusion via Dreambooth
#### model by qiufeng
This is the Stable Diffusion model fine-tuned on the cat_toy_test concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks toy**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
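A minimal `diffusers` sketch for running inference with the `instance_prompt` is shown below; the repository id is a placeholder, since the actual repo name is not stated on this card.
```python
# Sketch: generate an image with the Dreambooth-tuned checkpoint and its instance prompt.
# "sd-dreambooth-library/cat-toy-test" is an assumed placeholder repo id.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/cat-toy-test", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks toy").images[0]
image.save("sks-toy.png")
```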
Here are the images used for training this concept:




|
Arnold/common_voiceha | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- amh
tags:
- Amharic
- Word Piece Tokenizer
- Tokenizer
license: cc-by-4.0
---
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("israel/AmhWordPieceTokenizer")
# Tokenize the Amharic sentence into word pieces
tokens = tokenizer.tokenize("ኮሌጁ ቢያስተምርም ወደስራ የሚመድባቸው መንግስት ነው abcs")
print(tokens)
``` |
Arnold/wav2vec2-large-xlsr-turkish-demo-colab | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-10-17T11:27:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilBERT-POWO_Epiphyte_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-POWO_Epiphyte_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
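As referenced above, a hedged sketch of the equivalent `transformers.TrainingArguments`; the `output_dir` is a placeholder, and the Adam betas and epsilon listed are the library defaults.
```python
# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# output_dir is an assumed placeholder, not taken from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="DistilBERT-POWO_Epiphyte_Finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```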
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0825 | 1.0 | 2063 | 0.0824 |
| 0.0729 | 2.0 | 4126 | 0.0825 |
| 0.061 | 3.0 | 6189 | 0.0810 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
AryanLala/autonlp-Scientific_Title_Generator-34558227 | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:AryanLala/autonlp-data-Scientific_Title_Generator",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible",
"has_space"
]
| text2text-generation | {
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 103 | 2022-10-17T12:23:09Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-17102022-cert_update_date
co2_eq_emissions:
emissions: 18.37074974959855
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1786462003
- CO2 Emissions (in grams): 18.3707
## Validation Metrics
- Loss: 0.019
- Accuracy: 0.995
- Precision: 0.835
- Recall: 0.867
- F1: 0.851
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-17102022-cert_update_date-1786462003
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-17102022-cert_update_date-1786462003", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-17102022-cert_update_date-1786462003", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ashagi/Ashvx | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord
type: cord
args: cord
metrics:
- name: Precision
type: precision
value: 0.9174649963154016
- name: Recall
type: recall
value: 0.9318862275449101
- name: F1
type: f1
value: 0.9246193835870776
- name: Accuracy
type: accuracy
value: 0.9405772495755518
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2834
- Precision: 0.9175
- Recall: 0.9319
- F1: 0.9246
- Accuracy: 0.9406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 4.17 | 250 | 1.0175 | 0.7358 | 0.7882 | 0.7611 | 0.8014 |
| 1.406 | 8.33 | 500 | 0.5646 | 0.8444 | 0.8735 | 0.8587 | 0.8671 |
| 1.406 | 12.5 | 750 | 0.3943 | 0.8950 | 0.9184 | 0.9065 | 0.9189 |
| 0.3467 | 16.67 | 1000 | 0.3379 | 0.9138 | 0.9289 | 0.9213 | 0.9291 |
| 0.3467 | 20.83 | 1250 | 0.2842 | 0.9189 | 0.9334 | 0.9261 | 0.9419 |
| 0.1484 | 25.0 | 1500 | 0.2822 | 0.9233 | 0.9371 | 0.9302 | 0.9427 |
| 0.1484 | 29.17 | 1750 | 0.2906 | 0.9168 | 0.9319 | 0.9243 | 0.9372 |
| 0.0825 | 33.33 | 2000 | 0.2922 | 0.9183 | 0.9334 | 0.9258 | 0.9410 |
| 0.0825 | 37.5 | 2250 | 0.2842 | 0.9154 | 0.9319 | 0.9236 | 0.9397 |
| 0.0596 | 41.67 | 2500 | 0.2834 | 0.9175 | 0.9319 | 0.9246 | 0.9406 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AshtonBenson/DialoGPT-small-quentin-coldwater | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- davanstrien/autotrain-data-genre
co2_eq_emissions:
emissions: 0.9208696533455494
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1787562029
- CO2 Emissions (in grams): 0.9209
## Validation Metrics
- Loss: 0.288
- Accuracy: 0.891
- Macro F1: 0.847
- Micro F1: 0.891
- Weighted F1: 0.888
- Macro Precision: 0.868
- Micro Precision: 0.891
- Weighted Precision: 0.888
- Macro Recall: 0.831
- Micro Recall: 0.891
- Weighted Recall: 0.891
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-genre-1787562029
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-genre-1787562029", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-genre-1787562029", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Aspect11/DialoGPT-Medium-LiSBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- davanstrien/autotrain-data-genre
co2_eq_emissions:
emissions: 1.0552169006255405
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1787562032
- CO2 Emissions (in grams): 1.0552
## Validation Metrics
- Loss: 0.264
- Accuracy: 0.899
- Macro F1: 0.851
- Micro F1: 0.899
- Weighted F1: 0.893
- Macro Precision: 0.907
- Micro Precision: 0.899
- Weighted Precision: 0.901
- Macro Recall: 0.818
- Micro Recall: 0.899
- Weighted Recall: 0.899
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-genre-1787562032
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-genre-1787562032", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-genre-1787562032", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
At3ee/wav2vec2-base-timit-demo-colab | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- davanstrien/autotrain-data-genre
co2_eq_emissions:
emissions: 0.5720826917621539
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1787562036
- CO2 Emissions (in grams): 0.5721
## Validation Metrics
- Loss: 0.283
- Accuracy: 0.891
- Macro F1: 0.843
- Micro F1: 0.891
- Weighted F1: 0.886
- Macro Precision: 0.877
- Micro Precision: 0.891
- Weighted Precision: 0.888
- Macro Recall: 0.820
- Micro Recall: 0.891
- Weighted Recall: 0.891
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-genre-1787562036
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-genre-1787562036", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-genre-1787562036", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Atarax/rick | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- davanstrien/autotrain-data-genre
co2_eq_emissions:
emissions: 0.4153486253352739
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1787562037
- CO2 Emissions (in grams): 0.4153
## Validation Metrics
- Loss: 0.288
- Accuracy: 0.879
- Macro F1: 0.830
- Micro F1: 0.879
- Weighted F1: 0.876
- Macro Precision: 0.853
- Micro Precision: 0.879
- Weighted Precision: 0.876
- Macro Recall: 0.812
- Micro Recall: 0.879
- Weighted Recall: 0.879
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-genre-1787562037
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-genre-1787562037", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-genre-1787562037", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Atchuth/DialoGPT-small-MichaelBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- davanstrien/autotrain-data-genre
co2_eq_emissions:
emissions: 3.383419482870438
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1787562034
- CO2 Emissions (in grams): 3.3834
## Validation Metrics
- Loss: 0.273
- Accuracy: 0.902
- Macro F1: 0.857
- Micro F1: 0.902
- Weighted F1: 0.897
- Macro Precision: 0.904
- Micro Precision: 0.902
- Weighted Precision: 0.903
- Macro Recall: 0.828
- Micro Recall: 0.902
- Weighted Recall: 0.902
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-genre-1787562034
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-genre-1787562034", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-genre-1787562034", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ateeb/EmotionDetector | [
"pytorch",
"funnel",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"FunnelForSequenceClassification"
],
"model_type": "funnel",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
language:
- ru
tags:
- PyTorch
- GAN
- Handwritten
datasets:
- "sberbank-ai/Peter"
license: mit
---
This is a weights storage for models trained by [ScrabbleGAN](https://github.com/ai-forever/ScrabbleGAN) |
Ateeb/asd | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language: ru
license: unlicense
widget:
- source_sentence: "Кошка ловит мышку."
sentences: ["Кто ловит мышку?", "Где живет кошка?", "Как мышку зовут?"]
---
# SBERT_PQ
This is a [sentence-transformers](https://www.SBERT.net) model designed to score the relevance
between a short text (typically a single sentence of up to 10-15 words) and a question.
The model computes 312-dimensional vectors for the text and the question. The cosine of the angle between these vectors
estimates whether the text contains an answer to the given question. In the [dialogue system project](https://github.com/Koziev/chatbot)
it is used for semantic search over a fact database given the question asked by the user.
# Speed and accuracy
The model is based on [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).
It is very small and runs inference quickly even on a CPU.
The maximum cossim_f1 score on the test split (10% of the dataset) is **0.986**.
With sberbank-ai/ruBert-base as the base model, the maximum cossim_f1 reaches **0.992**.
## Usage with the Sentence-Transformers library
You need to install [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
To score the relevance of a single text-question pair, you can use code like this:
```
import sentence_transformers
sentences = ["Кошка ловит мышку.", "Чем занята кошка?"]
model = sentence_transformers.SentenceTransformer('inkoziev/sbert_pq')
embeddings = model.encode(sentences)
s = sentence_transformers.util.cos_sim(a=embeddings[0], b=embeddings[1])
print('text={} question={} cossim={}'.format(sentences[0], sentences[1], s))
```
## Contacts and citation
```
@MISC{rugpt_chitchat,
author = {Ilya Koziev},
title = {Texts & Questions Relevancy Model},
url = {https://huggingface.co/inkoziev/sbert_pq},
year = 2022
}
```
|
Augustvember/WokkaBot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-171022-update_label2
co2_eq_emissions:
emissions: 19.661735872263936
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1788462049
- CO2 Emissions (in grams): 19.6617
## Validation Metrics
- Loss: 0.031
- Accuracy: 0.991
- Precision: 0.755
- Recall: 0.812
- F1: 0.783
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-171022-update_label2-1788462049
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-171022-update_label2-1788462049", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-171022-update_label2-1788462049", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Augustvember/WokkaBot7 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### logo with face on shield on Stable Diffusion
This is the `<logo-huizhang>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:







|
Augustvember/WokkaBot9 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
datasets:
- squad
model:
- facebook/data2vec-text-base
---
<h1>data2vec squad</h1>
This is a data2vec model fine-tuned on the SQuAD dataset as a test; any improvements and suggestions are welcome!
<h3>Intended use</h3>
Question Answering
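A minimal usage sketch for extractive question answering (the model id below is a placeholder for wherever this fine-tuned checkpoint is hosted):
```python
# Sketch: extractive QA with a data2vec-text checkpoint fine-tuned on SQuAD.
# "your-username/data2vec-text-base-squad" is an assumed placeholder model id.
from transformers import pipeline

qa = pipeline("question-answering", model="your-username/data2vec-text-base-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This data2vec model was fine-tuned on the SQuAD dataset as a test.",
)
print(result["answer"], result["score"])
```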
<h3>Training results</h3>
<table>
<thead>
<tr>
<th>Epoch</th>
<th>Training Loss</th>
<th>Validation Loss</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td><span style="font-family: Roboto, Noto, sans-serif; font-size: 14px; font-style: normal; font-weight: 400; text-align: right;">1.015800</span><br></td>
<td><span style="font-family: Roboto, Noto, sans-serif; font-size: 14px; font-style: normal; font-weight: 400; text-align: right;">0.997690</span><br></td>
</tr>
<tr>
<td>2</td>
<td><span style="font-family: Roboto, Noto, sans-serif; font-size: 14px; font-style: normal; font-weight: 400; text-align: right;">0.804400</span></td>
<td><span style="font-family: Roboto, Noto, sans-serif; font-size: 14px; font-style: normal; font-weight: 400; text-align: right;">0.950322</span><br></td>
</tr>
</tbody>
</table>
<h3>Hyperparameters</h3>
<ul>
<li>evaluation_strategy="epoch"</li>
<li>learning_rate=2e-5</li>
<li>per_device_train_batch_size=15</li>
<li>per_device_eval_batch_size=15</li>
<li>num_train_epochs=2</li>
<li>weight_decay=0.01</li>
</ul>
<h3>Frameworks and libraries used:</h3>
<ul>
<li>transformers</li>
<li>datasets</li>
<li>evaluate</li>
</ul> |
Augustvember/WokkaBotF | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ## 1 - Set up the environment
### 1.0 Check the GPU
!nvidia-smi -L
### 1.1 Download and install dependencies
Set up Miniconda:
import sys
!wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!chmod +x Miniconda3-latest-Linux-x86_64.sh
!bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
sys.path.append('/usr/local/lib/python3.7/site-packages/')
!rm Miniconda3-latest-Linux-x86_64.sh
### 1.2 Configure the environment
Set up the environment, GFPGAN, and Real-ESRGAN. Takes about 5-6 minutes.
#@markdown ### Set up conda environment - Takes a while
!conda env update -n base -f /content/stable-diffusion/environment.yaml
### 1.3 Set up GFPGAN and ESRGAN
#@markdown ### Build upscalers support
#@markdown **GFPGAN** automatically corrects distorted faces with a built-in GFPGAN option, fixing them in less than half a second
#@markdown **ESRGAN** Boosts the resolution of images with a built-in RealESRGAN option
#@markdown LDSR and GoBig enable amazing upscale options in the new Image Lab
add_CFP = True #@param {type:"boolean"}
add_ESR = True #@param {type:"boolean"}
add_LDSR = False #@param {type:"boolean"}
#@markdown ⚠️ LDSR is 1.9GB and may take time to download
if add_CFP:
%cd /content/stable-diffusion/src/gfpgan/
!pip install basicsr facexlib yapf lmdb opencv-python pyyaml tb-nightly --no-deps
!python setup.py develop
!pip install realesrgan
!wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models
if add_ESR:
%cd /content/stable-diffusion/src/realesrgan/
!wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
!wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P experiments/pretrained_models
if add_LDSR:
%cd /content/stable-diffusion/src
!git clone https://github.com/devilismyfriend/latent-diffusion
%cd latent-diffusion
%mkdir -p experiments/
%cd experiments/
%mkdir -p pretrained_models
%cd pretrained_models
#project.yaml download
!wget -O project.yaml https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
#model.ckpt model download
!wget -O model.ckpt https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
%cd /content/stable-diffusion/
!wget https://github.com/matomo-org/travis-scripts/blob/master/fonts/Arial.ttf?raw=true -O arial.ttf
# 2. Configure NovelAI
**You can expand this section to set a password**; otherwise a random one is generated.
After every change, re-run the cell below.
## Download and copy the files
This takes at least 4 minutes; please wait.
If it fails, simply re-run steps 2 and 3.
!sudo apt-get install aria2
!sudo apt-get install file
!mkdir /content/time
!git clone https://github.com/pnpnpn/timeout-decorator.git /content/time
%cd /content/time
!pwd
!ls -l
# Download NovelAI
%cd /content/time
import timeout_decorator
outTime=180
@timeout_decorator.timeout(outTime)
def downNovelAI():
!rm -rf /content/n2
!mkdir /content/n2
%cd /content/n2
!aria2c "magnet:?xt=urn:btih:4a4b483d4a5840b6e1fee6b0ca1582c979434e4d&dn=naifu&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce"
def checkFile():
!file /content/n2/naifu/models/animefull-final-pruned/model.ckpt>fileinfo
!file /content/n2/naifu/models/animevae.pt>fileinfo2
f1=open("fileinfo")
res1=f1.read()
f1.close
f2=open("fileinfo2")
res2=f2.read()
f2.close
return "Zip" in res1 and "Zip" in res2
while 1:
try:
downNovelAI()
except:
if checkFile():
print("下载完成")
outTime+=60
break
else:
print("下载未完成,自动重试")
# Download the WebUI
!mkdir /content/novelai
%cd /content/novelai
!git clone https://github.com/RyensX/stable-diffusion-webui-zh /content/novelai
%cd /content/novelai
!git checkout -b master
# Copy the models
!cp /content/n2/naifu/models/animefull-final-pruned/model.ckpt /content/novelai/models/Stable-diffusion/
!cp /content/n2/naifu/models/animevae.pt /content/novelai/models/Stable-diffusion/model.pt
!mkdir -p /content/novelai/train_images/raw/
!mkdir -p /content/novelai/train_images/des/
## Set a password
If none is set, a random one is generated.
After every change, re-run the cell below.
import random
keys="abcdefghigklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
#@markdown # Service username
user="Iris" #@param {type:"string"}
if len(user)==0:
user="".join([random.choice(keys) for i in range(random.randint(4,6))])
#@markdown # Service password
pwd="212121" #@param {type:"string"}
if len(pwd)==0:
pwd="".join([random.choice(keys) for i in range(random.randint(6,8))])
# 3. Run NovelAI
* When it starts successfully, two blue URLs are displayed.
* Click a URL **like** ~https://xxxx.gradio.app/~ to access it externally; you can share it with others.
* If it runs successfully but no link is shown, too many people may be generating links at once; try **re-running** this step.
* If an image appears without the progress bar moving and the interface never comes back, that is also due to heavy load; just refresh the page.
**You can stop and re-run the cell below as many times as you like** to control whether NovelAI is running.
%cd /content/novelai
print("#####################################################################################################################")
print(f"* 账号密码分别是{user}和{pwd}")
print("#######################################")
print("!!!运行成功时会显示两个蓝色的地址,点击下方类似 https://xxxx.gradio.app/ 的网址即可外部访问,支持分享给别人用")
print("!!!注意看上面文本提示")
print("#####################################################################################################################")
!python launch.py --share --gradio-auth {user}:{pwd} --deepdanbooru |
Ayham/albert_gpt2_Full_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: cc-by-nc-4.0
---
A simple height weight model |
Ayham/albert_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
---
### zero on Stable Diffusion
This is the `<zero>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
Ayham/bert_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pachi107/autotrain-data-ethos-sentiments
co2_eq_emissions:
emissions: 1.1703390276575862
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1790262080
- CO2 Emissions (in grams): 1.1703
## Validation Metrics
- Loss: 0.469
- Accuracy: 0.830
- Precision: 0.856
- Recall: 0.841
- AUC: 0.898
- F1: 0.848
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pachi107/autotrain-ethos-sentiments-1790262080
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262080", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262080", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ayham/bertgpt2_cnn | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pachi107/autotrain-data-ethos-sentiments
co2_eq_emissions:
emissions: 0.8181506582658064
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1790262082
- CO2 Emissions (in grams): 0.8182
## Validation Metrics
- Loss: 0.565
- Accuracy: 0.775
- Precision: 0.783
- Recall: 0.832
- AUC: 0.823
- F1: 0.807
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pachi107/autotrain-ethos-sentiments-1790262082
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262082", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262082", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ayham/distilbert_distilgpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Ayham/distilbert_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: damilare-akin/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ayham/roberta_roberta_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: nyaszzzz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nyaszzzz
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6801 | 0.5 | 1384 | 1.4490 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ayham/xlmroberta_large_gpt2_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: mit
---
### Willy-HD on Stable Diffusion
This is the `<willy_character>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
---
Model card for the Question Answering component (component 2) of the Discord Questions paper (EMNLP 2022 - Findings). The model is a finetuned RoBERTa-large. Example usage coming soon. |
Ayham/xlnet_roberta_new_summarization_cnn_dailymail | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_uncased_fine_tuned_sent140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_uncased_fine_tuned_sent140
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9132
- Accuracy: 0.7914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 408 | 0.7043 | 0.7406 |
| 0.7838 | 2.0 | 816 | 0.7407 | 0.7727 |
| 0.4194 | 3.0 | 1224 | 0.9132 | 0.7914 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ayoola/pytorch_model | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
widget:
- text: "2021\n\n"
---
Full code and details at https://github.com/csinva/gpt-paper-title-generator
**Model**
- finetunes starting from the [gpt-neo-2.7B checkpoint](https://huggingface.co/EleutherAI/gpt-neo-2.7B)
- for training details see [the training script](https://github.com/csinva/gpt-paper-title-generator/blob/0157f26be9b0763b4ea6480e5b149fdb8dff4626/gptneo/02_finetune_hf.py)
- inference
- should prepend with a year and two newlines before querying for a title, e.g. `2022\n\n`
```python
from transformers import AutoModelForCausalLM, pipeline, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("csinva/gpt-neo-2.7B-titles")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)
pipe('2022\n\n')
```
**Data**
- all [papers on arXiv](https://www.kaggle.com/datasets/Cornell-University/arxiv) in the categories cs.AI, cs.LG, stat.ML
  - date cutoff: only finetuned on papers with a date on or before Apr 1, 2022
- random 5% of papers also excluded
- this results in 98,388 papers for finetuning
- during finetuning each paper title was given starting with the prompt `<year>\n\n <title>\n` (e.g. `2022\n\n Emb-GAM: an Interpretable and Efficient Predictor using Pre-trained Language Models\n`) |
Ayran/DialoGPT-medium-harry-potter-1-through-3 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-10-17T18:57:08Z | ---
license: mit
---
### Sezz on Stable Diffusion via Dreambooth
#### model by estealbertosanz
This is the Stable Diffusion model fine-tuned on the Sezz concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a real photo of sezz**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
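For a quick local test outside the notebooks, a minimal `diffusers` sketch is shown below; the repository id is a placeholder for wherever this fine-tuned model is hosted, and the prompt simply reuses the instance prompt above.
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repository id -- replace with the actual repo/path of this fine-tuned model
model_id = "sd-dreambooth-library/sezz"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Reuse the instance prompt the concept was trained with
image = pipe("a real photo of sezz", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sezz.png")
```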
Here are the images used for training this concept:



















|
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 148.00 +/- 47.52
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
To learn to code your own PPO agent and train it, see Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
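For reference, the hyperparameters listed below feed into PPO's clipped surrogate objective; a minimal illustrative sketch (not the exact course implementation) using the `clip_coef`, `vf_coef`, and `ent_coef` values from this run looks like this.
```python
import torch


def ppo_loss(log_probs, old_log_probs, advantages, values, returns, entropy,
             clip_coef=0.2, vf_coef=0.5, ent_coef=0.01):
    """Clipped PPO objective: policy loss + weighted value loss - entropy bonus."""
    ratio = (log_probs - old_log_probs).exp()
    unclipped = -advantages * ratio
    clipped = -advantages * torch.clamp(ratio, 1 - clip_coef, 1 + clip_coef)
    policy_loss = torch.max(unclipped, clipped).mean()
    value_loss = 0.5 * (values - returns).pow(2).mean()
    return policy_loss + vf_coef * value_loss - ent_coef * entropy.mean()
```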
# Hyperparameters
```python
{'exp_name': '__file__'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'f': '/root/.local/share/jupyter/runtime/kernel-9c96fe8c-041c-4681-aa25-a76703c94d0d.json'
'repo_id': 'heriosousa/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
AyushPJ/ai-club-inductions-21-nlp-XLNet | [
"pytorch",
"xlnet",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"XLNetForQuestionAnsweringSimple"
],
"model_type": "xlnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 250
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5942
- eval_accuracy: 0.9189
- eval_f1: 0.9154
- eval_runtime: 114.314
- eval_samples_per_second: 69.983
- eval_steps_per_second: 8.748
- epoch: 9.0
- step: 36000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AyushPJ/test-squad-trained-finetuned-squad | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: robbert_base_fine_tuned_sent140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbert_base_fine_tuned_sent140
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9218
- Accuracy: 0.7433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 408 | 0.8129 | 0.7246 |
| 0.9065 | 2.0 | 816 | 0.7640 | 0.7273 |
| 0.5407 | 3.0 | 1224 | 0.9218 | 0.7433 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Azaghast/DistilBERT-SCP-Class-Classification | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dead
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dead
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6198
- Train End Logits Accuracy: 0.5843
- Train Start Logits Accuracy: 0.5459
- Validation Loss: 1.2514
- Validation End Logits Accuracy: 0.6603
- Validation Start Logits Accuracy: 0.6255
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2766, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.6198 | 0.5843 | 0.5459 | 1.2514 | 0.6603 | 0.6255 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BeIR/query-gen-msmarco-t5-large-v1 | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 1,225 | null | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1.4B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1.4B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-1.4B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> |
Bhumika/roberta-base-finetuned-sst2 | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 85 | null | ---
license: mit
---
### youpi2 on Stable Diffusion
This is the `<youpi>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
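Outside the notebooks, a Textual Inversion embedding is usually loaded by adding the new token to the tokenizer and copying its learned vector into the text encoder; the sketch below shows that pattern. The repository id and the `learned_embeds.bin` filename are assumptions about how this concept is hosted, so adjust them to the actual files.
```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

# Assumed repository id and filename for the learned <youpi> embedding
embeds_path = hf_hub_download(repo_id="sd-concepts-library/youpi2", filename="learned_embeds.bin")

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Register the new token and copy its learned embedding into the text encoder
learned_embeds = torch.load(embeds_path, map_location="cpu")
token, embedding = next(iter(learned_embeds.items()))
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe("a photo of <youpi> on a desk").images[0]
image.save("youpi.png")
```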
Here is the new concept you will be able to use as an `object`:




|
Biasface/DDDC | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
---
https://github.com/S-T-Full-Text-Knowledge-Mining/CssBERT |
BigSalmon/BestMask2 | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: lmv2-g-rai_1-995-doc-10-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv2-g-rai_1-995-doc-10-18
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0163
- Dob Key Precision: 0.7638
- Dob Key Recall: 0.7638
- Dob Key F1: 0.7638
- Dob Key Number: 127
- Dob Value Precision: 0.9767
- Dob Value Recall: 0.9767
- Dob Value F1: 0.9767
- Dob Value Number: 129
- Doctor Name Key Precision: 0.6970
- Doctor Name Key Recall: 0.6866
- Doctor Name Key F1: 0.6917
- Doctor Name Key Number: 67
- Doctor Name Value Precision: 0.9275
- Doctor Name Value Recall: 0.9143
- Doctor Name Value F1: 0.9209
- Doctor Name Value Number: 70
- Patient Name Key Precision: 0.7055
- Patient Name Key Recall: 0.7357
- Patient Name Key F1: 0.7203
- Patient Name Key Number: 140
- Patient Name Value Precision: 0.9724
- Patient Name Value Recall: 0.9792
- Patient Name Value F1: 0.9758
- Patient Name Value Number: 144
- Overall Precision: 0.8460
- Overall Recall: 0.8523
- Overall F1: 0.8492
- Overall Accuracy: 0.9958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Dob Key Precision | Dob Key Recall | Dob Key F1 | Dob Key Number | Dob Value Precision | Dob Value Recall | Dob Value F1 | Dob Value Number | Doctor Name Key Precision | Doctor Name Key Recall | Doctor Name Key F1 | Doctor Name Key Number | Doctor Name Value Precision | Doctor Name Value Recall | Doctor Name Value F1 | Doctor Name Value Number | Patient Name Key Precision | Patient Name Key Recall | Patient Name Key F1 | Patient Name Key Number | Patient Name Value Precision | Patient Name Value Recall | Patient Name Value F1 | Patient Name Value Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------------------:|:----------------------:|:------------------:|:----------------------:|:---------------------------:|:------------------------:|:--------------------:|:------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:-------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.5034 | 1.0 | 796 | 0.0841 | 0.7143 | 0.7480 | 0.7308 | 127 | 0.7881 | 0.9225 | 0.85 | 129 | 0.0 | 0.0 | 0.0 | 67 | 0.0 | 0.0 | 0.0 | 70 | 0.5988 | 0.7143 | 0.6515 | 140 | 0.4908 | 0.9236 | 0.6410 | 144 | 0.5944 | 0.6603 | 0.6256 | 0.9887 |
| 0.0579 | 2.0 | 1592 | 0.0365 | 0.7231 | 0.7402 | 0.7315 | 127 | 0.9766 | 0.9690 | 0.9728 | 129 | 0.6462 | 0.6269 | 0.6364 | 67 | 0.9296 | 0.9429 | 0.9362 | 70 | 0.7103 | 0.7357 | 0.7228 | 140 | 0.9392 | 0.9653 | 0.9521 | 144 | 0.8282 | 0.8405 | 0.8343 | 0.9954 |
| 0.0317 | 3.0 | 2388 | 0.0297 | 0.7578 | 0.7638 | 0.7608 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.7077 | 0.6866 | 0.6970 | 67 | 0.8676 | 0.8429 | 0.8551 | 70 | 0.6474 | 0.7214 | 0.6824 | 140 | 0.8993 | 0.9306 | 0.9147 | 144 | 0.8101 | 0.8316 | 0.8207 | 0.9943 |
| 0.0233 | 4.0 | 3184 | 0.0195 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9403 | 0.9767 | 0.9582 | 129 | 0.7015 | 0.7015 | 0.7015 | 67 | 0.9718 | 0.9857 | 0.9787 | 70 | 0.6164 | 0.7 | 0.6555 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8222 | 0.8538 | 0.8377 | 0.9958 |
| 0.0189 | 5.0 | 3980 | 0.0188 | 0.7462 | 0.7638 | 0.7549 | 127 | 0.9545 | 0.9767 | 0.9655 | 129 | 0.5606 | 0.5522 | 0.5564 | 67 | 0.9565 | 0.9429 | 0.9496 | 70 | 0.6228 | 0.7429 | 0.6775 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8054 | 0.8434 | 0.8240 | 0.9955 |
| 0.0174 | 6.0 | 4776 | 0.0167 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5970 | 0.5970 | 0.5970 | 67 | 0.9714 | 0.9714 | 0.9714 | 70 | 0.6478 | 0.7357 | 0.6890 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8250 | 0.8493 | 0.8370 | 0.9956 |
| 0.0162 | 7.0 | 5572 | 0.0185 | 0.7578 | 0.7638 | 0.7608 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.4272 | 0.6567 | 0.5176 | 67 | 0.9677 | 0.8571 | 0.9091 | 70 | 0.7007 | 0.7357 | 0.7178 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7997 | 0.8434 | 0.8210 | 0.9954 |
| 0.0153 | 8.0 | 6368 | 0.0170 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5758 | 0.5672 | 0.5714 | 67 | 0.9571 | 0.9571 | 0.9571 | 70 | 0.7305 | 0.7357 | 0.7331 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8437 | 0.8449 | 0.8443 | 0.9957 |
| 0.0142 | 9.0 | 7164 | 0.0163 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6970 | 0.6866 | 0.6917 | 67 | 0.9275 | 0.9143 | 0.9209 | 70 | 0.7055 | 0.7357 | 0.7203 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8460 | 0.8523 | 0.8492 | 0.9958 |
| 0.0136 | 10.0 | 7960 | 0.0177 | 0.7405 | 0.7638 | 0.7519 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6094 | 0.5821 | 0.5954 | 67 | 0.8358 | 0.8 | 0.8175 | 70 | 0.6541 | 0.7429 | 0.6957 | 140 | 0.9589 | 0.9722 | 0.9655 | 144 | 0.8075 | 0.8301 | 0.8186 | 0.9953 |
| 0.0131 | 11.0 | 8756 | 0.0202 | 0.7402 | 0.7402 | 0.7402 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5968 | 0.5522 | 0.5736 | 67 | 0.9403 | 0.9 | 0.9197 | 70 | 0.7305 | 0.7357 | 0.7331 | 140 | 0.9655 | 0.9722 | 0.9689 | 144 | 0.8390 | 0.8316 | 0.8353 | 0.9954 |
| 0.0134 | 12.0 | 9552 | 0.0195 | 0.7239 | 0.7638 | 0.7433 | 127 | 0.9237 | 0.8450 | 0.8826 | 129 | 0.5846 | 0.5672 | 0.5758 | 67 | 0.9041 | 0.9429 | 0.9231 | 70 | 0.7305 | 0.7357 | 0.7331 | 140 | 0.9722 | 0.9722 | 0.9722 | 144 | 0.8193 | 0.8168 | 0.8180 | 0.9949 |
| 0.0127 | 13.0 | 10348 | 0.0169 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.7077 | 0.6866 | 0.6970 | 67 | 0.9403 | 0.9 | 0.9197 | 70 | 0.6211 | 0.7143 | 0.6645 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8256 | 0.8464 | 0.8359 | 0.9957 |
| 0.0119 | 14.0 | 11144 | 0.0174 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5821 | 0.5821 | 0.5821 | 67 | 0.9437 | 0.9571 | 0.9504 | 70 | 0.6897 | 0.7143 | 0.7018 | 140 | 0.9338 | 0.9792 | 0.9559 | 144 | 0.8261 | 0.8419 | 0.8339 | 0.9955 |
| 0.013 | 15.0 | 11940 | 0.0174 | 0.6953 | 0.7008 | 0.6980 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6164 | 0.6716 | 0.6429 | 67 | 0.9706 | 0.9429 | 0.9565 | 70 | 0.6667 | 0.7143 | 0.6897 | 140 | 0.9583 | 0.9583 | 0.9583 | 144 | 0.8150 | 0.8331 | 0.8240 | 0.9950 |
| 0.0133 | 16.0 | 12736 | 0.0195 | 0.7008 | 0.7008 | 0.7008 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5823 | 0.6866 | 0.6301 | 67 | 0.9054 | 0.9571 | 0.9306 | 70 | 0.6174 | 0.6571 | 0.6367 | 140 | 0.9161 | 0.9097 | 0.9129 | 144 | 0.7860 | 0.8139 | 0.7997 | 0.9946 |
| 0.0154 | 17.0 | 13532 | 0.0239 | 0.6885 | 0.6614 | 0.6747 | 127 | 0.8623 | 0.9225 | 0.8914 | 129 | 0.5057 | 0.6567 | 0.5714 | 67 | 0.9403 | 0.9 | 0.9197 | 70 | 0.3727 | 0.5857 | 0.4556 | 140 | 0.9655 | 0.9722 | 0.9689 | 144 | 0.6829 | 0.7858 | 0.7308 | 0.9935 |
| 0.0163 | 18.0 | 14328 | 0.0437 | 0.6607 | 0.5827 | 0.6192 | 127 | 0.5736 | 0.8760 | 0.6933 | 129 | 0.4177 | 0.4925 | 0.4521 | 67 | 0.8243 | 0.8714 | 0.8472 | 70 | 0.4845 | 0.5571 | 0.5183 | 140 | 0.5990 | 0.7986 | 0.6845 | 144 | 0.5816 | 0.7001 | 0.6354 | 0.9887 |
| 0.0109 | 19.0 | 15124 | 0.0220 | 0.7578 | 0.7638 | 0.7608 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.7097 | 0.6567 | 0.6822 | 67 | 0.9403 | 0.9 | 0.9197 | 70 | 0.6776 | 0.7357 | 0.7055 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8404 | 0.8479 | 0.8441 | 0.9955 |
| 0.0104 | 20.0 | 15920 | 0.0184 | 0.6093 | 0.7244 | 0.6619 | 127 | 0.976 | 0.9457 | 0.9606 | 129 | 0.6133 | 0.6866 | 0.6479 | 67 | 0.9437 | 0.9571 | 0.9504 | 70 | 0.6013 | 0.6571 | 0.6280 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7778 | 0.8272 | 0.8017 | 0.9950 |
| 0.0086 | 21.0 | 16716 | 0.0232 | 0.3889 | 0.4409 | 0.4133 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5270 | 0.5821 | 0.5532 | 67 | 0.9444 | 0.9714 | 0.9577 | 70 | 0.5245 | 0.5357 | 0.5300 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7143 | 0.7459 | 0.7298 | 0.9930 |
| 0.0085 | 22.0 | 17512 | 0.0197 | 0.7480 | 0.7480 | 0.7480 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6471 | 0.6567 | 0.6519 | 67 | 0.9189 | 0.9714 | 0.9444 | 70 | 0.6149 | 0.65 | 0.6319 | 140 | 0.9658 | 0.9792 | 0.9724 | 144 | 0.8165 | 0.8346 | 0.8254 | 0.9951 |
| 0.0083 | 23.0 | 18308 | 0.0220 | 0.7328 | 0.7559 | 0.7442 | 127 | 0.9692 | 0.9767 | 0.9730 | 129 | 0.6081 | 0.6716 | 0.6383 | 67 | 0.9571 | 0.9571 | 0.9571 | 70 | 0.6479 | 0.6571 | 0.6525 | 140 | 0.9592 | 0.9792 | 0.9691 | 144 | 0.8170 | 0.8375 | 0.8271 | 0.9952 |
| 0.0084 | 24.0 | 19104 | 0.0226 | 0.6418 | 0.6772 | 0.6590 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5 | 0.7164 | 0.5890 | 67 | 0.8919 | 0.9429 | 0.9167 | 70 | 0.5034 | 0.5286 | 0.5157 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7462 | 0.7991 | 0.7718 | 0.9942 |
| 0.0067 | 25.0 | 19900 | 0.0257 | 0.6691 | 0.7165 | 0.6920 | 127 | 0.9692 | 0.9767 | 0.9730 | 129 | 0.6267 | 0.7015 | 0.6620 | 67 | 0.9143 | 0.9143 | 0.9143 | 70 | 0.6828 | 0.7071 | 0.6947 | 140 | 0.94 | 0.9792 | 0.9592 | 144 | 0.8045 | 0.8390 | 0.8214 | 0.9949 |
| 0.0071 | 26.0 | 20696 | 0.0241 | 0.5828 | 0.6929 | 0.6331 | 127 | 0.9692 | 0.9767 | 0.9730 | 129 | 0.6029 | 0.6119 | 0.6074 | 67 | 0.8889 | 0.9143 | 0.9014 | 70 | 0.5563 | 0.5643 | 0.5603 | 140 | 0.9658 | 0.9792 | 0.9724 | 144 | 0.7602 | 0.7962 | 0.7778 | 0.9943 |
| 0.0072 | 27.0 | 21492 | 0.0222 | 0.6850 | 0.6850 | 0.6850 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5714 | 0.6567 | 0.6111 | 67 | 0.9178 | 0.9571 | 0.9371 | 70 | 0.6370 | 0.6643 | 0.6503 | 140 | 0.9592 | 0.9792 | 0.9691 | 144 | 0.7983 | 0.8242 | 0.8110 | 0.9948 |
| 0.0057 | 28.0 | 22288 | 0.0259 | 0.5909 | 0.6142 | 0.6023 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6714 | 0.7015 | 0.6861 | 67 | 0.9275 | 0.9143 | 0.9209 | 70 | 0.5734 | 0.5857 | 0.5795 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7820 | 0.7947 | 0.7883 | 0.9943 |
| 0.0054 | 29.0 | 23084 | 0.0299 | 0.6418 | 0.6772 | 0.6590 | 127 | 0.9618 | 0.9767 | 0.9692 | 129 | 0.6216 | 0.6866 | 0.6525 | 67 | 0.8873 | 0.9 | 0.8936 | 70 | 0.5306 | 0.5571 | 0.5436 | 140 | 0.9655 | 0.9722 | 0.9689 | 144 | 0.7678 | 0.7962 | 0.7817 | 0.9937 |
| 0.0066 | 30.0 | 23880 | 0.0254 | 0.5532 | 0.6142 | 0.5821 | 127 | 0.9259 | 0.9690 | 0.9470 | 129 | 0.5938 | 0.5672 | 0.5802 | 67 | 0.9130 | 0.9 | 0.9065 | 70 | 0.6738 | 0.6786 | 0.6762 | 140 | 0.9592 | 0.9792 | 0.9691 | 144 | 0.7747 | 0.7976 | 0.7860 | 0.9943 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.2.2
- Tokenizers 0.13.1
|
BigSalmon/FormalBerta2 | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
license: apache-2.0
---
https://github.com/S-T-Full-Text-Knowledge-Mining/CssBERT |
BigSalmon/FormalBerta3 | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
BigSalmon/GPT2HardArticleEasyArticle | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-10-18T06:38:04Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- micole66/autotrain-data-animals
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.6998538355363139
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1797562141
- CO2 Emissions (in grams): 0.6999
## Validation Metrics
- Loss: 0.096
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000 |
BigSalmon/GPT2HardandEasy | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | pip install diffusers transformers nvidia-ml-py3 ftfy pytorch pillow
|
BigSalmon/GPTNeo350MInformalToFormalLincoln6 | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: mit
---
### Beholder on Stable Diffusion
This is the `<beholder>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
BigSalmon/GPTT | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9236875354311616
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2154
- Accuracy: 0.9235
- F1: 0.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.773 | 1.0 | 250 | 0.2981 | 0.9065 | 0.9037 |
| 0.2415 | 2.0 | 500 | 0.2154 | 0.9235 | 0.9237 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
BigSalmon/MrLincoln12 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.07 +/- 25.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repository id and filename below are placeholders for this model's actual files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename -- point these at the files of this model
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BigSalmon/MrLincoln125MNeo | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: mit
---
Based on google/mt5-base and trained on [DGT-TM](https://www.kaggle.com/datasets/hgultekin/paralel-translation-corpus-in-22-languages) |
BigSalmon/MrLincoln13 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8655737704918034
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3194
- Accuracy: 0.8633
- F1: 0.8656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
BigSalmon/MrLincoln14 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### Progress Chip on Stable Diffusion
This is the `<progress-chip>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
BigSalmon/MrLincoln2 | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: giusepperusso/distilbert-base-uncased-finetuned-The_Donald
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# giusepperusso/distilbert-base-uncased-finetuned-The_Donald
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7889
- Validation Loss: 2.5521
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30250, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7889 | 2.5521 | 0 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
BigSalmon/MrLincoln3 | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17 | null | ---
license: apache-2.0
pipeline_tag: text-generation
widget:
- text: "1.1.1.21<sep><start>"
inference:
parameters:
top_k: 9
repetition_penalty: 1.2
---
# **ZymCTRL**
ZymCTRL (Enzyme Control) ([Paper presented @ Machine Learning for Structural Biology workshop - December 2022](https://www.mlsb.io/papers_2022/ZymCTRL_a_conditional_language_model_for_the_controllable_generation_of_artificial_enzymes.pdf))
is a conditional language model for the generation of artificial functional enzymes. It was trained on the entire BRENDA database of enzymes, comprising over 37 M sequences.
Given a user-defined Enzyme Commission (EC) number, the model generates protein sequences that fulfill that catalytic reaction.
The generated sequences are ordered, globular, and distant from natural ones, while their intended catalytic properties match those defined by users.
If you don't know the EC number of your protein of interest, have a look at the BRENDA webpage: https://www.brenda-enzymes.org/ecexplorer.php?browser=1
See below for information about the model, how to generate sequences, and how to save and rank them by perplexity.
## **Model description**
ZymCTRL is based on the [CTRL Transformer](https://arxiv.org/abs/1909.05858) architecture (which in turn is very similar to ChatGPT) and contains 36 layers
with a model dimensionality of 1280, totaling 738 million parameters.
ZymCTRL is a decoder-only transformer model pre-trained on the BRENDA database
(version July 2022). The pre-training was done on the raw sequences without FASTA headers,
with the EC classes prepended to each sequence. The databases will be uploaded soon.
ZymCTRL was trained with an autoregressive objective, i.e., the model learns to predict
the next token given a sequence context. Because the first tokens on each sequence encode the EC numbers,
the model learns the dependencies among EC classes and their corresponding sequences and is able to _speak_ the enzyme language.
There are stark differences in the number of members among EC classes, and for this reason, we also tokenized the EC numbers.
In this manner, EC numbers '2.7.1.1' and '2.7.1.2' share the first three tokens (six, including separators), and hence the model can infer that
there are relationships between the two classes.
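As a quick illustration (a minimal sketch, assuming you have downloaded ZymCTRL to a local folder so that its tokenizer is available), you can check how two related EC labels share their leading tokens:

```python
# Minimal sketch: show that related EC numbers share their leading tokens.
# Assumption: ZymCTRL has been downloaded locally; replace the placeholder path.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('/path/to/zymCTRL')

for ec in ['2.7.1.1', '2.7.1.2']:
    print(ec, tokenizer.tokenize(ec + '<sep>'))
# The two printed token lists should differ only at the final EC level,
# which is what lets the model pick up relationships between classes.
```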
The figure below summarizes the process of training:

## **How to use ZymCTRL**
ZymCTRL can be used with the Hugging Face `transformers` Python package.
Detailed installation instructions can be found here: https://huggingface.co/docs/transformers/installation
Since ZymCTRL has been trained with the classical language modeling objective on enzyme sequences with their EC annotation,
it particularly excels at generating enzyme sequences given a user-defined EC class, such as alcohol dehydrogenases ('1.1.1.2').
The model can generate in two ways: in a zero-shot fashion, i.e., directly generating from the checkpoint weights, or after fine-tuning.
Fine-tuning allows augmenting the BRENDA datasets that were used during training, for example,
if you have a curated internal dataset or a set of ancestrally-reconstructed sequences. This is entirely optional. One advantage of
running the model in zero-shot is that it doesn't require any further training.
### **Example 1: Generating nitrilases (EC 3.5.5.1)**
The script below can be used for the generation of any BRENDA class in a zero-shot fashion;
here we showcase the generation of novel nitrilases.
To run this script, you should download ZymCTRL to a local folder on your workstation.
Then replace the placeholders in the script with your actual folder path.
You can run it directly from the command line (once you have the Hugging Face `transformers` package installed)
with the following command: `python generate.py`
The script will write each sequence in a fasta file in the folder you specify. In the fasta header,
it will store the sequence's computed perplexity value. Perplexity is a measure of the model's confidence
in that generation, with lower values being better. The sequences are ordered by perplexity before writing them out,
so those that finish in *_0.fasta and *_1.fasta will be the best ones per batch.
**Given that generation runs so fast, we recommend generating hundreds or thousands of sequences and then only picking the best 5% or less.
With the script below, that would mean picking only those that finish in '_0.fasta'. Good perplexity values for this model should be below 1.5-1.75.**
```
import torch
from transformers import GPT2LMHeadModel, AutoTokenizer
import os
from tqdm import tqdm
import math
def remove_characters(sequence, char_list):
"This function removes special tokens used during training."
columns = sequence.split('<sep>')
seq = columns[1]
for char in char_list:
seq = seq.replace(char, '')
return seq
def calculatePerplexity(input_ids,model,tokenizer):
"This function computes perplexities for the generated sequences"
with torch.no_grad():
outputs = model(input_ids, labels=input_ids)
loss, logits = outputs[:2]
return math.exp(loss)
def main(label, model,special_tokens,device,tokenizer):
# Generating sequences
input_ids = tokenizer.encode(label,return_tensors='pt').to(device)
outputs = model.generate(
input_ids,
top_k=9, # sample from the 9 most likely tokens at each step
repetition_penalty=1.2,
max_length=1024,
eos_token_id=1,
pad_token_id=0,
do_sample=True,
num_return_sequences=20) # Depending on your GPU, you'll be able to generate fewer or more sequences. This runs on an A40.
# Check sequence sanity: ensure sequences are not truncated.
# The model will truncate sequences longer than the specified max_length (1024 above). We want to avoid those sequences.
new_outputs = [ output for output in outputs if output[-1] == 0]
if not new_outputs:
print("not enough sequences with short lengths!!")
# Compute perplexity for every generated sequence in the batch
ppls = [(tokenizer.decode(output), calculatePerplexity(output, model, tokenizer)) for output in new_outputs ]
# Sort the batch by perplexity, the lower the better
ppls.sort(key=lambda i: i[1])
# Final dictionary with the results
sequences={}
sequences[label] = [(remove_characters(x[0], special_tokens), x[1]) for x in ppls]
return sequences
if __name__=='__main__':
device = torch.device("cuda") # Replace with 'cpu' if you don't have a GPU - but it will be slow
print('Reading pretrained model and tokenizer')
tokenizer = AutoTokenizer.from_pretrained('/path/to/zymCTRL/') # change to ZymCTRL location
model = GPT2LMHeadModel.from_pretrained('/path/to/zymCTRL').to(device) # change to ZymCTRL location
special_tokens = ['<start>', '<end>', '<|endoftext|>','<pad>',' ', '<sep>']
# change to the appropriate BRENDA EC classes
labels=['3.5.5.1'] # nitrilases. You can put as many labels as you want.
for label in tqdm(labels):
# We'll run 100 batches per label. 20 sequences will be generated per batch.
for i in range(0,100):
sequences = main(label, model, special_tokens, device, tokenizer)
for key,value in sequences.items():
for index, val in enumerate(value):
# Sequences will be saved with the name of the label followed by the batch index,
# and the order of the sequence in that batch.
fn = open(f"/path/to/folder/{label}_{i}_{index}.fasta", "w")
fn.write(f'>{label}_{i}_{index}\t{val[1]}\n{val[0]}')
fn.close()
```
## **Example 2: Fine-tuning on a set of user-defined sequences**
This alternative to the zero-shot generation allows updating ZymCTRL's weights to new sequences.
This strategy is not strictly necessary; in fact, we have observed good generations even for EC classes where there are
only 1-2 representatives in Nature. But you might have an internal set of sequences that you'd like to incorporate into the model.
For example, internal datasets after protein engineering efforts,
ancestrally-reconstructed sets, or after searching against metagenomics databases. In these cases, it is advisable to fine-tune ZymCTRL,
as it will learn new properties from your dataset and potentially improve the generation quality
(especially for poorly populated EC classes).
To fine-tune ZymCTRL, you will need to process your sequences quite a bit. The scripts below do exactly that without any
modifications. The only requirement is to start with an input file, 'sequences.fasta', which contains all the sequences in FASTA format.
We recommend using at least 200 sequences to obtain the best results. But we've seen it work with fewer sequences, so if you don't have
that many, give it a go anyway.
```
import random
import transformers
from transformers import AutoTokenizer
# 1. Read the source file
with open('sequences.fasta', 'r') as fn:
data = fn.readlines()
fn.close()
# Put sequences into dictionary
sequences={}
for line in data:
if '>' in line:
name = line.strip()
sequences[name] = ['2.7.3.12'] # modify with the actual EC class.
continue
sequences[name].append(line.strip())
# Process fasta entries into a single string - run this part only if the fastas were formatted to 60 characters per line
processed_sequences = {}
for name, sequence in sequences.items():
processed_sequences[f"{sequence[0]};{name}"] = ''.join([x for x in sequence[1:]])
# Shuffle sequences
sequences_list = [(key,value) for key,value in processed_sequences.items()]
random.shuffle(sequences_list)
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained('/path/to/ZymCTRL')
# the objective is to get strings that, when tokenized, will span a window length of 1024.
# for each sequence, store its length and untokenized string
print("processing dataset")
processed_dataset = []
for i in sequences_list:
# length of the control code
label = i[0].split(';')[0]
sequence = i[1].strip()
separator = '<sep>'
control_code_length = len(tokenizer(label+separator)['input_ids'])
available_space = 1021 - control_code_length # It is not 1024 because of '<|endoftext|>' and the start/end tokens
# Option 1: the sequence is larger than the available space (3-4% of sequences in BRENDA are over 1024)
if len(sequence) > available_space:
total_length = control_code_length + len(sequence[:available_space]) + 1
seq = f"{label}{separator}{sequence[:available_space]}<|endoftext|>"
processed_dataset.append((total_length, seq))
# Option 2 & 3: The sequence fits in the block_size space with or without padding
else:
total_length = control_code_length + len(sequence) + 3
# in this case the sequence fits, so the start and end tokens are added
seq = f"{label}{separator}<start>{sequence}<end><|endoftext|>"
processed_dataset.append((total_length, seq))
# Helper function to group sequences
def grouper(iterable):
prev = None
group = ''
total_sum = 0
for item in iterable:
if prev is None or item[0] + total_sum < 1025:
group += item[1]
total_sum += item[0]
else:
total_sum = item[0]
yield group
group = item[1]
prev = item
if group:
total_sum = 0
yield group
# Group sequences
print("grouping processed dataset")
grouped_dataset=dict(enumerate(grouper(processed_dataset),1))
# Save the processed file out, padding every block up to 1024 tokens
fn = open("./2.7.3.12_processed.txt",'w')
for key,value in grouped_dataset.items():
padding_len = 1024 - len(tokenizer(value)['input_ids'])
padding = "<pad>"*padding_len
print(len(tokenizer(value+padding)['input_ids']))
fn.write(value+padding)
fn.write("\n")
fn.close()
```
The previous script will prepare a text file with the correct format for tokenization.
Now we can use the tokenizer to convert its contents to tokens.
```
from datasets import load_dataset
import transformers
from transformers.testing_utils import CaptureLogger
# Load the tokenizer again
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('/path/to/ZymCTRL') # change to ZymCTRL location
#Load the data files
data_files = {}
dataset_args = {}
validation_split_percentage = 10 # for a split 90/10
data_files["train"] = './2.7.3.12_processed.txt'
extension = "text"
raw_datasets = load_dataset(extension, data_files=data_files, cache_dir='.', **dataset_args)
tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base")
# Load datasets using the HF datasets library:
raw_datasets["train"] = load_dataset(extension,
data_files=data_files,
split=f"train[{validation_split_percentage}%:]",
cache_dir='.',
**dataset_args,)
raw_datasets["validation"] = load_dataset(extension,
data_files=data_files,
split=f"train[:{validation_split_percentage}%]",
cache_dir='.',
**dataset_args,)
def tokenize_function(examples):
" This function tokenizes input"
with CaptureLogger(tok_logger) as cl:
output = tokenizer(examples["text"])
# clm input could be much much longer than block_size
if "Token indices sequence length is longer than the" in cl.out:
tok_logger.warning(
"^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits before being passed to the model."
)
return output
# tokenize in parallel
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=32,
remove_columns=['text'],
load_from_cache_file = False,
desc="Running tokenizer on dataset",
)
train_dataset = tokenized_datasets["train"]
eval_dataset = tokenized_datasets["validation"]
train_dataset.save_to_disk('./dataset/train')
eval_dataset.save_to_disk('./dataset/eval')
# This has saved the datasets tokenized. Now we need to group them into the block size of 1024
from datasets import load_from_disk
train_dataset = load_from_disk('./dataset/train')
eval_dataset = load_from_disk('./dataset/eval')
from datasets.dataset_dict import DatasetDict
tokenized_datasets = DatasetDict()
tokenized_datasets["train"] = train_dataset
tokenized_datasets["validation"] = eval_dataset
block_size = 1024
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop,
# you can customize this part to your needs.
if total_length >= block_size:
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=124,
load_from_cache_file=False,
desc=f"Grouping texts in chunks of {block_size}",
)
train_dataset = lm_datasets["train"]
eval_dataset = lm_datasets["validation"]
train_dataset.save_to_disk('./dataset/train2')
eval_dataset.save_to_disk('./dataset/eval2')
```
The processed datasets will be inside the folder dataset/, called train2 and eval2.
You could also put the two previous scripts into a single one and run it in one go (that is what we do).
Now you are ready to fine-tune the model.
To do that, you can take the trainer file that we provide in this repository (5.run_clm-post.py), or use the trainer from Hugging Face.
The command below shows an example with a specific learning rate,
but you could try other hyperparameters to obtain the best training and evaluation losses.
```
python 5.run_clm-post.py --tokenizer_name /path/to/ZymCTRL \
 --do_train --do_eval --output_dir output --evaluation_strategy steps --eval_steps 10 \
 --logging_steps 5 --save_steps 500 --num_train_epochs 28 --per_device_train_batch_size 1 \
 --per_device_eval_batch_size 4 --cache_dir '.' --save_total_limit 2 --learning_rate 0.8e-04 \
 --dataloader_drop_last True --model_type gpt2 --config_name /path/to/ZymCTRL \
 --gradient_accumulation_steps 4
```
In any case, the original HuggingFace script run_clm.py can be found here:
https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
### **Training specs**
The model was trained on 48 NVIDIA A100 GPUs for eight epochs,
using a block size of 1024 and a total batch size of 768.
The optimizer used was Adam (beta1 = 0.9, beta2 = 0.999)
with a learning rate of 0.8e-04.
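For orientation, the setup above roughly corresponds to the following Hugging Face `TrainingArguments`; this is only a sketch that restates the numbers quoted in this section (the per-device batch size of 16 is an assumption, chosen so that 16 x 48 GPUs gives the total batch size of 768), not the authors' actual training script:

```python
# Sketch only: the pre-training hyperparameters from this section as TrainingArguments.
# per_device_train_batch_size=16 is an assumption (16 x 48 A100s = 768 total batch size).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zymctrl-pretraining",
    num_train_epochs=8,
    per_device_train_batch_size=16,
    learning_rate=0.8e-4,
    adam_beta1=0.9,
    adam_beta2=0.999,
)
```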
### **Contact**
We are the AI for Protein Design group at the Institute of Molecular Biology of Barcelona (https://www.aiproteindesign.com/).
For any questions, post an issue in this repository so that other people can benefit from the feedback, and I'll get back to you shortly.
We are always open to collaborations; send an email to nfccri [at] ibmb [dot] csic [dot] es |
BigSalmon/NEO125InformalToFormalLincoln | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sagemaker-bert-base-intent1018
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-bert-base-intent1018
This model is a fine-tuned version of [asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0371
- Accuracy: 0.0855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 44 | 4.2225 | 0.0192 |
| No log | 2.0 | 88 | 4.0371 | 0.0855 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
BigSalmon/Points2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language:
- ru
tags:
- PyTorch
- GAN
- Handwritten
datasets:
- "sberbank-ai/school_notebooks_RU"
- "sberbank-ai/school_notebooks_EN"
license: mit
---
This repository stores weights for models trained with [ScrabbleGAN](https://github.com/ai-forever/ScrabbleGAN) |
BigSalmon/Rowerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
BigSalmon/T52 | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 8 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sagemaker-bert-base-intent1018_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-bert-base-intent1018_2
This model is a fine-tuned version of [asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5145
- Accuracy: 0.9017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 88 | 4.0951 | 0.0470 |
| No log | 2.0 | 176 | 3.7455 | 0.2158 |
| No log | 3.0 | 264 | 3.0505 | 0.4252 |
| No log | 4.0 | 352 | 2.0489 | 0.6303 |
| No log | 5.0 | 440 | 1.3342 | 0.7735 |
| 2.9556 | 6.0 | 528 | 0.9592 | 0.8162 |
| 2.9556 | 7.0 | 616 | 0.7623 | 0.8162 |
| 2.9556 | 8.0 | 704 | 0.6262 | 0.8547 |
| 2.9556 | 9.0 | 792 | 0.5145 | 0.9017 |
| 2.9556 | 10.0 | 880 | 0.5328 | 0.8846 |
| 2.9556 | 11.0 | 968 | 0.5137 | 0.8932 |
| 0.3206 | 12.0 | 1056 | 0.5190 | 0.8846 |
| 0.3206 | 13.0 | 1144 | 0.5158 | 0.8953 |
| 0.3206 | 14.0 | 1232 | 0.5053 | 0.8974 |
| 0.3206 | 15.0 | 1320 | 0.5140 | 0.8953 |
| 0.3206 | 16.0 | 1408 | 0.5108 | 0.8996 |
| 0.3206 | 17.0 | 1496 | 0.5282 | 0.8932 |
| 0.0381 | 18.0 | 1584 | 0.5278 | 0.8974 |
| 0.0381 | 19.0 | 1672 | 0.5224 | 0.8996 |
| 0.0381 | 20.0 | 1760 | 0.5226 | 0.8996 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
BigSalmon/TS3 | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible",
"has_space"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
BigSalmon/prepositions | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-memes-v3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.847758887171561
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-memes-v3
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3862
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5649 | 0.99 | 40 | 0.6342 | 0.7488 |
| 0.3083 | 1.99 | 80 | 0.4146 | 0.8423 |
| 0.1563 | 2.99 | 120 | 0.3900 | 0.8547 |
| 0.0827 | 3.99 | 160 | 0.3862 | 0.8478 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
BigTooth/Megumin-v0.2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
language:
- cy
tags:
- punctuation prediction
- punctuation
license: mit
widget:
- text: "A yw'r gweinidog yn cytuno bod angen gwell gwasanaethau yn ne ddwyrain Cymru"
example_title: "Example 1"
- text: "Mae Pwllheli yn dref yng Ngwynedd Gogledd Cymru ac mae Llandrindod ym Mhowys"
example_title: "Example 2"
metrics:
- f1
---
This model predicts the punctuation of Welsh-language texts. It has been created to restore the punctuation of texts transcribed by speech recognition models such as https://huggingface.co/techiaith/wav2vec2-xlsr-ft-cy. The model restores the following punctuation markers: "." "," "?" "-" ":"
The model was trained on Welsh texts extracted from the Welsh Parliament / Senedd Record of Proceedings between 1999-2010 and 2016 to the present day. Please note that the training data consists of originally spoken and translated political speeches. Therefore the model might perform differently on texts from other domains.
Based on the work of https://github.com/oliverguhr/fullstop-deep-punctuation-prediction and [softcatala/fullstop-catalan-punctuation-prediction](https://huggingface.co/softcatala/fullstop-catalan-punctuation-prediction)
## Install
To get started, install the deepmultilingualpunctuation package from [pypi](https://pypi.org/project/deepmultilingualpunctuation/):
```bash
pip install deepmultilingualpunctuation
```
### Restore Punctuation
```python
from deepmultilingualpunctuation import PunctuationModel
model = PunctuationModel("techiaith/fullstop-welsh-punctuation-prediction")
text = "A yw'r gweinidog yn cytuno bod angen gwell gwasanaethau yn ne ddwyrain Cymru"
result = model.restore_punctuation(text)
print(result)
```
**output**
```
[
{
"entity_group": "LABEL_0",
"score": 0.9999812841415405,
"word": "A yw'r gweinidog yn cytuno bod angen gwell gwasanaethau yn",
"start": 0,
"end": 58
},
{
"entity_group": "LABEL_4",
"score": 0.9787278771400452,
"word": "ne",
"start": 59,
"end": 61
},
{
"entity_group": "LABEL_0",
"score": 0.9999902248382568,
"word": "ddwyrain",
"start": 62,
"end": 70
},
{
"entity_group": "LABEL_3",
"score": 0.9484745860099792,
"word": "Cymru",
"start": 71,
"end": 76
}
]
```
> A yw'r gweinidog yn cytuno bod angen gwell gwasanaethau yn ne-ddwyrain Cymru?
## Results
The model achieves the following F1 scores for the different punctuation markers:
| Label | Precision | Recall | f1-score | Support |
| ------------- | ----- | ----- | ----- | ----- |
| 0 | 0.99 | 0.99 | 0.99 | 5053572 |
| . | 0.89 | 0.88 | 0.88 | 224920 |
| , | 0.83 | 0.82 | 0.82 | 363886 |
| ? | 0.91 | 0.87 | 0.89 | 20762 |
| - | 0.95 | 0.94 | 0.94 | 13161 |
| : | 0.92 | 0.89 | 0.90 | 5274 |
| | | | | |
| accuracy | | | 0.98 | 11012581 |
| macro average | 0.92 | 0.90 | 0.91 | 11012581 |
| weighted average | 0.98 | 0.98 | 0.98 | 11012581 |
|
BigeS/DialoGPT-small-Rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | The QP model from the paper [Quality Controlled Paraphrase Generation](https://aclanthology.org/2022.acl-long.45/)
Important: read [this](https://github.com/IBM/quality-controlled-paraphrase-generation/issues/5#issuecomment-1238453742) before any use.
More details on the model training and usage can be found in this [GitHub repo](https://github.com/IBM/quality-controlled-paraphrase-generation). |
BillelBenoudjit/jplu-wikiann | [
"fr",
"dataset:wikiann",
"model-index"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | The QP model from the paper [Quality Controlled Paraphrase Generation](https://aclanthology.org/2022.acl-long.45/)
Important: read [this](https://github.com/IBM/quality-controlled-paraphrase-generation/issues/5#issuecomment-1238453742) before any use.
More details on the model training and usage can be found in this [GitHub repo](https://github.com/IBM/quality-controlled-paraphrase-generation). |
Bilz/DialoGPT-small-harrypotter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | The QP model from the paper [Quality Controlled Paraphrase Generation](https://aclanthology.org/2022.acl-long.45/)
Important: read [this](https://github.com/IBM/quality-controlled-paraphrase-generation/issues/5#issuecomment-1238453742) before any use.
More details on the model training and usage can be found in this [GitHub repo](https://github.com/IBM/quality-controlled-paraphrase-generation). |
Binbin/test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: train
args: cord
metrics:
- name: Precision
type: precision
value: 0.9472118959107807
- name: Recall
type: recall
value: 0.9535928143712575
- name: F1
type: f1
value: 0.9503916449086163
- name: Accuracy
type: accuracy
value: 0.9562818336162988
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2152
- Precision: 0.9472
- Recall: 0.9536
- F1: 0.9504
- Accuracy: 0.9563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.56 | 250 | 0.9909 | 0.7582 | 0.8099 | 0.7832 | 0.8128 |
| 1.3653 | 3.12 | 500 | 0.5650 | 0.8392 | 0.8675 | 0.8531 | 0.8756 |
| 1.3653 | 4.69 | 750 | 0.3851 | 0.8865 | 0.9177 | 0.9018 | 0.9181 |
| 0.3744 | 6.25 | 1000 | 0.3104 | 0.9280 | 0.9364 | 0.9322 | 0.9380 |
| 0.3744 | 7.81 | 1250 | 0.2778 | 0.9347 | 0.9424 | 0.9385 | 0.9440 |
| 0.1955 | 9.38 | 1500 | 0.2316 | 0.9327 | 0.9446 | 0.9386 | 0.9440 |
| 0.1955 | 10.94 | 1750 | 0.2461 | 0.9414 | 0.9491 | 0.9452 | 0.9533 |
| 0.1349 | 12.5 | 2000 | 0.2316 | 0.9379 | 0.9491 | 0.9435 | 0.9478 |
| 0.1349 | 14.06 | 2250 | 0.2227 | 0.9487 | 0.9551 | 0.9519 | 0.9533 |
| 0.1024 | 15.62 | 2500 | 0.2152 | 0.9472 | 0.9536 | 0.9504 | 0.9563 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
BinksSachary/DialoGPT-small-shaxx | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
inference: false
---
# SetFit Classification Model on Conversion Dataset with L6 SBERT Model as Base
This is a SetFit model that uses the L6 SBERT model as a base for classification.
<!--- Describe your model here -->
## Usage (Setfit)
```
pip install setfit
```
Then you can use the model like this:
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.1-l6")
prediction = model(['i want to buy thing'])
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2163 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2163,
"warmup_steps": 217,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Dataset Used
https://huggingface.co/datasets/nayan06/conversion1.0
## Citing & Authors
<!--- Describe where people can find more information -->
|
BinksSachary/ShaxxBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SetFit Classification Model on Conversion Dataset with L12 SBERT Model as Base
This is a SetFit model that uses the L12 SBERT model as a base for classification.
<!--- Describe your model here -->
## Usage (Setfit)
```
pip install setfit
```
Then you can use the model like this:
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.1-l12")
prediction = model(['i want to buy thing'])
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2163 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2163,
"warmup_steps": 217,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Dataset Used
https://huggingface.co/datasets/nayan06/conversion1.0
## Citing & Authors
<!--- Describe where people can find more information --> |
BinksSachary/ShaxxBot2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language: amh
tags:
- Amharic
- masked language model
- language model
- Ethiopia
license: cc-by-4.0
widget:
- text: ማስታወሻ የፊታችን እሁድ [MASK]
- text: ጸሃይ መማር [MASK]
---
# AmharicRoBERTa
|
BitanBiswas/mbert-bengali-ner-finetuned-ner | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-10-18T11:38:15Z | ---
language: en
thumbnail: http://www.huggingtweets.com/cryptoanglio/1666099242969/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1539914688611459074/jnZfe1Rf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Anglio.S🟠L (33.3%)</div>
<div style="text-align: center; font-size: 14px;">@cryptoanglio</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Anglio.S🟠L (33.3%).
| Data | Anglio.S🟠L (33.3%) |
| --- | --- |
| Tweets downloaded | 3213 |
| Retweets | 634 |
| Short tweets | 562 |
| Tweets kept | 2017 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2g8dyjwv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cryptoanglio's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3laoj52a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3laoj52a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cryptoanglio')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Blackmist786/DialoGPt-small-transformers4 | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SetFit Classification Model on Conversion Dataset with MPNet SBERT Model as Base
This is a SetFit model with an SBERT model (named in the title) as a base for classification.
<!--- Describe your model here -->
## Usage (Setfit)
```
pip install setfit
```
Then you can use the model like this:
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.1-mpnet")
prediction = model(['i want to buy thing'])
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2163 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2163,
"warmup_steps": 217,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Dataset Used
https://huggingface.co/datasets/nayan06/conversion1.0
## Citing & Authors
<!--- Describe where people can find more information --> |
Blazeolmo/Scrabunzi | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/exxonmobil-tencentglobal-wef/1666111008009/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/902558084064616448/YTOCYYnn_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1397133852246646784/Z4XI4oyC_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/565498192171507712/r2Hb2gvX_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ExxonMobil & Tencent 腾讯 & World Economic Forum</div>
<div style="text-align: center; font-size: 14px;">@exxonmobil-tencentglobal-wef</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ExxonMobil & Tencent 腾讯 & World Economic Forum.
| Data | ExxonMobil | Tencent 腾讯 | World Economic Forum |
| --- | --- | --- | --- |
| Tweets downloaded | 3248 | 590 | 3250 |
| Retweets | 209 | 39 | 29 |
| Short tweets | 7 | 1 | 6 |
| Tweets kept | 3032 | 550 | 3215 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/146l36xw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @exxonmobil-tencentglobal-wef's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kqpaxkc6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kqpaxkc6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/exxonmobil-tencentglobal-wef')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BlightZz/DialoGPT-medium-Kurisu | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19 | null | Access to model Yaswantthhh/autotrain-yash-1801862270 is restricted and you are not in the authorized list. Visit https://huggingface.co/Yaswantthhh/autotrain-yash-1801862270 to ask for access. |
BlightZz/MakiseKurisu | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | Access to model Yaswantthhh/autotrain-yash-1801862271 is restricted and you are not in the authorized list. Visit https://huggingface.co/Yaswantthhh/autotrain-yash-1801862271 to ask for access. |
BlueGamerBeast/DialoGPT-small-joshua | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
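While the snippet above is still a TODO, a minimal sketch of sampling from an unconditional DDPM pipeline with 🤗 Diffusers would look roughly like the following. The repo id is taken from the TensorBoard link further down in this card, and the `.images` output attribute assumes a reasonably recent diffusers release.

```python
import torch
from diffusers import DDPMPipeline

# Load the fine-tuned pipeline (repo id inferred from the TensorBoard link in this card).
pipeline = DDPMPipeline.from_pretrained("yhyxgwy/ddpm-butterflies-128")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# Sample a single 128x128 butterfly image; `.images` holds PIL images.
image = pipeline(batch_size=1).images[0]
image.save("butterfly_sample.png")
```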
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/yhyxgwy/ddpm-butterflies-128/tensorboard?#scalars)
|
Bman/DialoGPT-medium-harrypotter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1748
- Validation Loss: 0.0673
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
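As a usage sketch (not part of the original card): since the checkpoint was produced with Keras, the example below assumes TF weights are available and therefore passes `framework="tf"`; the entity label set depends on the unknown training dataset.

```python
from transformers import pipeline

# Hedged example: repo id matches this card's name; label names depend on the training data.
ner = pipeline(
    "token-classification",
    model="Rocketknight1/bert-finetuned-ner",
    framework="tf",                 # the card was generated by Keras, so TF weights are assumed
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```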
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1748 | 0.0673 | 0 |
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
BobBraico/distilbert-base-uncased-finetuned-imdb-accelerate | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- donut
- image-to-text
- vision
- endpoints-template
---
# Fork of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2)
> This is a fork of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) implementing a custom `handler.py` as an example of how to use `donut` models with [inference-endpoints](https://hf.co/inference-endpoints)
---
# Donut (base-sized model, fine-tuned on CORD)
Donut model fine-tuned on CORD. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.
# Use with Inference Endpoints
Hugging Face Inference Endpoints can work directly with binary data, which means that we can send our document image straight to the endpoint. We are going to use `requests` to send our requests (make sure you have it installed: `pip install requests`).

## Send requests with Python
Load the sample image:
```bash
wget https://huggingface.co/philschmid/donut-base-finetuned-cord-v2/resolve/main/sample.png
```
Send a request to the endpoint:
```python
import json
import requests as r
import mimetypes
ENDPOINT_URL="" # url of your endpoint
HF_TOKEN="" # organization token where you deployed your endpoint
def predict(path_to_image:str=None):
with open(path_to_image, "rb") as i:
b = i.read()
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": mimetypes.guess_type(path_to_image)[0]
}
response = r.post(ENDPOINT_URL, headers=headers, data=b)
return response.json()
prediction = predict(path_to_image="sample.png")
print(prediction)
# {'menu': [{'nm': '0571-1854 BLUS WANITA',
# 'unitprice': '@120.000',
# 'cnt': '1',
# 'price': '120,000'},
# {'nm': '1002-0060 SHOPPING BAG', 'cnt': '1', 'price': '0'}],
# 'total': {'total_price': '120,000',
# 'changeprice': '0',
# 'creditcardprice': '120,000',
# 'menuqty_cnt': '1'}}
```
**curl example**
```bash
curl https://ak7gduay2ypyr9vp.us-east-1.aws.endpoints.huggingface.cloud \
-X POST \
 --data-binary '@sample.png' \
-H "Authorization: Bearer XXX" \
-H "Content-Type: null"
``` |
BogdanKuloren/continual-learning-paper-embeddings-model | [
"pytorch",
"mpnet",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"MPNetModel"
],
"model_type": "mpnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.632
- name: F1
type: f1
value: 0.43209876543209874
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6646
- Accuracy: 0.632
- F1: 0.4321
## Model description
More information needed
## Intended uses & limitations
More information needed
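A hedged usage sketch (not part of the original card): the hub repo id of this checkpoint is not stated here, so the model path below is a placeholder for your local output directory or your own pushed copy.

```python
from transformers import pipeline

# "your-username/finetuning-sentiment-model-3000-samples" is a placeholder repo id.
classifier = pipeline(
    "text-classification",
    model="your-username/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was a complete waste of time."))
# Output labels (e.g. LABEL_0 / LABEL_1) follow the fine-tuning setup on IMDB.
```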
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Botjallu/DialoGPT-small-harrypotter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-10-18T13:42:17Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- german
- nli
- text-classification
---
# airnicco8/xlm-roberta-de
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. It is trained on the [TED talks transcripts](https://www.kaggle.com/datasets/rounakbanik/ted-talks) filtered to German only; the training setup is described [here](https://towardsdatascience.com/a-complete-guide-to-transfer-learning-from-english-to-other-languages-using-sentence-embeddings-8c427f8804a9). It can be used straightforwardly for sentence similarity, but can also be fine-tuned for NLI and text classification; examples are coming soon.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["das ist eine glückliche Frau", "das ist ein glücklicher Mann", "das ist ein glücklicher Hund"]
model = SentenceTransformer('airnicco8/xlm-roberta-de')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["das ist eine glückliche Frau", "das ist ein glücklicher Mann", "das ist ein glücklicher Hund"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('airnicco8/xlm-roberta-de')
model = AutoModel.from_pretrained('airnicco8/xlm-roberta-de')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=airnicco8/xlm-roberta-de)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3071 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Branex/gpt-neo-2.7B | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
Various pretrained models and voices for the git [repo](https://github.com/torphix/tts-inference)
Follow the instructions in the repo README for usage.
|
Brayan/CNN_Brain_Tumor | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Brendan/cse244b-hw2-roberta | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: apache-2.0
---
# OFA-huge-vqa
## Introduction
This is the **huge** version of OFA model finetuned for **VQA**. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) to a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json` which contains the model configuration, `vocab.json` and `merge.txt` for our OFA tokenizer, and lastly `pytorch_model.bin` which contains the model weights. There is no need to worry about any mismatch between Fairseq and transformers, since we have already addressed the issue.
## How to use
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install transformers and download the models as shown below.
```
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-huge-vqa
```
Afterwards, set `ckpt_dir` to the path of the downloaded OFA-huge-vqa checkpoint, and prepare an image for the test example below. Also, ensure that you have pillow and torchvision in your environment.
```python
>>> import torch
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 480
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"# or any of your specified questions
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
# using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
# using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
BrianTin/MTBERT | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 65.80 +/- 13.11
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **CartPole-v1**
This is a trained model of a DQN agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn.py).
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/vwxyzjn/CartPole-v1-dqn-seed1/raw/main/dqn.py
curl -OL https://huggingface.co/vwxyzjn/CartPole-v1-dqn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/vwxyzjn/CartPole-v1-dqn-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn.py --cuda False --save-model --upload-model --total-timesteps 500
```
# Hyperparameters
```python
{'batch_size': 128,
'buffer_size': 10000,
'capture_video': False,
'cuda': False,
'end_e': 0.05,
'env_id': 'CartPole-v1',
'exp_name': 'dqn',
'exploration_fraction': 0.5,
'gamma': 0.99,
'hf_entity': '',
'learning_rate': 0.00025,
'learning_starts': 10000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 500,
'torch_deterministic': True,
'total_timesteps': 500,
'track': False,
'train_frequency': 10,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Brokette/projetCS | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- mlm
- generated_from_trainer
model-index:
- name: article2keyword2.1_barthez-orangesum-title_finetuned_for_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# article2keyword2.1_barthez-orangesum-title_finetuned_for_mlm
This model is a fine-tuned version of [moussaKam/barthez-orangesum-title](https://huggingface.co/moussaKam/barthez-orangesum-title) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2949 | 1.0 | 1370 | 0.0557 |
| 0.0569 | 2.0 | 2740 | 0.0477 |
| 0.0495 | 3.0 | 4110 | 0.0449 |
| 0.0444 | 4.0 | 5480 | 0.0437 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BumBelDumBel/ZORK_AI_FANTASY | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: openrail
---
Fusion model fine-tuned on CaseHOLD.
AMRBART is used to obtain AMR embeddings (the AMR graphs are generated with the SPRING AMR parser), and LegalBERT is used to obtain text embeddings. A rough, purely hypothetical sketch of one way such a fusion could be wired up is given below.
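This sketch is not the authors' implementation: it only illustrates concatenating pooled text and AMR representations before a classification head, and the checkpoint names, pooling choices, and head size are all assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class TextAmrFusion(nn.Module):
    """Hypothetical text + AMR fusion classifier (illustrative only)."""

    def __init__(self,
                 text_name="nlpaueb/legal-bert-base-uncased",  # assumed LegalBERT checkpoint
                 amr_name="xfbai/AMRBART-large",               # assumed AMRBART checkpoint
                 num_labels=2):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_name)
        # AMRBART is a BART-style seq2seq model; only its encoder is used here.
        self.amr_encoder = AutoModel.from_pretrained(amr_name).get_encoder()
        fused_size = (self.text_encoder.config.hidden_size
                      + self.amr_encoder.config.hidden_size)
        self.classifier = nn.Linear(fused_size, num_labels)

    def forward(self, text_inputs, amr_inputs):
        # [CLS]-style pooled text representation from LegalBERT
        text_vec = self.text_encoder(**text_inputs).last_hidden_state[:, 0]
        # Mean-pooled encoding of the linearised AMR graph from AMRBART's encoder
        amr_vec = self.amr_encoder(**amr_inputs).last_hidden_state.mean(dim=1)
        return self.classifier(torch.cat([text_vec, amr_vec], dim=-1))
```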
|
BumBelDumBel/ZORK_AI_SCIFI | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 196.41 +/- 19.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
```
Use the model like this
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="ThomasSimonini/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('LunarLander-v2')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
obs = eval_env.reset()
for i in range(1000):
action, _state = model.predict(obs)
obs, reward, done, info = eval_env.step(action)
eval_env.render()
if done:
obs = eval_env.reset()
eval_env.close()
``` |
BunakovD/sd | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- micole66/autotrain-data-mercuryorsodium
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.3397575484174952
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1804662320
- CO2 Emissions (in grams): 0.3398
## Validation Metrics
- Loss: 0.186
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000 |
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 85 | null | ---
tags:
- vision
- 3D
- 3D object detection
datasets:
- omni3d
metrics:
- AP
---
# 3D Object Detection with Cube R-CNN
3D Object Detection with Cube R-CNN is described in [**Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild**](https://arxiv.org/abs/2207.10660) and released in this [repository](https://github.com/facebookresearch/omni3d)
## Overview
A description of the model and its architecture are shown below
<img src="https://s3.amazonaws.com/moonup/production/uploads/1666115971617-634ededbd049354d7ee4b557.png" width=700px/>
## Training Data
Cube R-CNN was trained on Omni3D, a large benchmark for 3D object detection in the wild.
## Demo: Inference on Any Image
The model detects objects in 3D from a single image. There are 50 distinct object categories including *car, truck, chair, table, cabinet, books, and many more*.
The model assumes known focal length for the image in order to predict the right metric scale.
However, users can provide any focal length and will get predictions on a "relative" scale.
For example, we can predict 3D objects from COCO images with a user-defined focal length of 4.0, as shown below
<img src="https://github.com/facebookresearch/omni3d/blob/main/.github/generalization_coco.png?raw=true" width=500px/>
The above output is produced by our demo
```bash
python demo/demo.py \
--config cubercnn://omni3d/cubercnn_DLA34_FPN.yaml \
--input-folder "datasets/image_inputs" \
--threshold 0.25 --focal 4.0 --display \
MODEL.WEIGHTS cubercnn://omni3d/cubercnn_DLA34_FPN.pth \
OUTPUT_DIR output/demo
```
## Checkpoints
You can find model checkpoints in the original [model zoo](https://github.com/facebookresearch/omni3d/blob/main/MODEL_ZOO.md).
## Intended Use and Limitations
Cube R-CNN is a data-driven method trained on an annotated dataset, Omni3D. The purpose of the project is to advance 3D computer vision and 3D object recognition. The dataset contains a *pedestrian* category, which we acknowledge as a potential issue in the case of unethical applications of our model.
The limitations of our approach are erroneous predictions, especially for far-away objects, and mistakes in predicting rotations and depth. Our evaluation reports an analysis across various depths and object sizes to better understand performance.
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16,451 | null | ---
language: hu
license: apache-2.0
datasets:
- wikipedia
tags:
- generated_from_keras_callback
- hubert
model-index:
- name: hubert-medium-wiki
results: []
---
# hubert-medium-wiki
This model was trained from scratch on the Wikipedia subset of Hungarian Webcorpus 2.0 with MLM and SOP tasks.
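Not in the original card: a small fill-mask sketch. The repo id below is a placeholder (the card does not state where the checkpoint is published), and the mask token is assumed to be the standard BERT-style `[MASK]`.

```python
from transformers import pipeline

# "<org>/hubert-medium-wiki" is a placeholder — replace it with the actual hub repo id.
fill_mask = pipeline("fill-mask", model="<org>/hubert-medium-wiki")
print(fill_mask("Budapest Magyarország [MASK]."))  # "fővárosa" ("capital") would be a plausible top prediction
```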
### Pre-Training Parameters:
First phase:
- Training steps: 500.000
- Sequence length: 128
- Batch size: 1024
Second phase:
- Training steps: 100.000
- Sequence length: 512
- Batch size: 384
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
# Acknowledgement
[](https://mi.nemzetilabor.hu/) |
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 73 | null | ---
language:
- en
thumbnail: "https://s3.amazonaws.com/moonup/production/uploads/1663756797814-62bd5f951e22ec84279820e8.png"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
datasets:
- eolecvk/naruto-blip-captions
---
# Naruto diffusers (new version available [here](https://huggingface.co/lambdalabs/sd-naruto-diffusers))
__Stable Diffusion fine tuned on Naruto by [Lambda Labs](https://lambdalabs.com/).__
Put in a text prompt and generate your own Naruto character, no "prompt engineering" required!
If you want to find out how to train your own Stable Diffusion variants, see this [example](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning) from Lambda Labs.

> "Face of President Obama smiling", "Face of President Donald Trump", "Face of President Joe Biden"
## Usage
```bash
!pip install diffusers==0.3.0
!pip install transformers scipy ftfy
```
```python
import torch
from diffusers import StableDiffusionPipeline
from torch import autocast
pipe = StableDiffusionPipeline.from_pretrained("eolecvk/sd-naruto-diffusers", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Yoda"
scale = 10
n_samples = 4
# Sometimes the nsfw checker is confused by the Naruto images, you can disable
# it at your own risk here
disable_safety = False
if disable_safety:
def null_safety(images, **kwargs):
return images, False
pipe.safety_checker = null_safety
with autocast("cuda"):
images = pipe(n_samples*[prompt], guidance_scale=scale).images
for idx, im in enumerate(images):
im.save(f"{idx:06}.png")
```
## Model description
Trained on [BLIP-captioned Naruto images](https://huggingface.co/datasets/eolecvk/naruto-blip-captions) using 2xA6000 GPUs on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud) for around 30,000 steps (about 12 hours, at a cost of about $20).
## Links
- [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers)
- [Captioned Naruto dataset](https://huggingface.co/datasets/eolecvk/naruto-blip-captions)
- [Model weights in Diffusers format](https://huggingface.co/eolecvk/sd-naruto-diffusers)
- [Original model weights](https://huggingface.co/justinpinkney/pokemon-stable-diffusion)
- [Training code](https://github.com/justinpinkney/stable-diffusion)
Trained by Eole Cervenka after the work of [Justin Pinkney](justinpinkney.com) ([@Buntworthy](https://twitter.com/Buntworthy)) at [Lambda Labs](https://lambdalabs.com/). |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | 2022-10-18T18:04:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8766233766233766
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3365
- Accuracy: 0.8733
- F1: 0.8766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
CLAck/indo-mixed | [
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
license: mit
---
Welcome to the COVID-19 Misinformation Detector!
There is a lot of misinformation related to the COVID-19 vaccine being posted online from unreliable sources. The COVID-19 Misinformation Detector allows you to check if the information you are reading online (e.g. from Twitter or Facebook) contains misinformation or not!
Enter the text from the online post in the "Hosted inference API" text area to the right to check if it is misinformation. "LABEL_0" means that no misinformation was detected in the post, while "LABEL_1" means that the post is misinformation.
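You can also query the model programmatically. The snippet below is a hedged sketch: the repo id is a placeholder because it is not stated in this card, and the label mapping follows the description above.

```python
from transformers import pipeline

# "<user>/covid19-misinformation-detector" is a placeholder — substitute this model's actual repo id.
detector = pipeline("text-classification", model="<user>/covid19-misinformation-detector")

result = detector("The COVID-19 vaccine alters your DNA.")[0]
label = "misinformation (LABEL_1)" if result["label"] == "LABEL_1" else "no misinformation detected (LABEL_0)"
print(label, round(result["score"], 3))
```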
The COVID-19 Misinformation Detector is a modified version of the "bert-base-uncased" transformer model, found [here](https://huggingface.co/bert-base-uncased). It is fine-tuned on two datasets containing tweets relating to the COVID-19 pandemic; each tweet is labelled as containing misinformation (1) or not (0), as verified by healthcare experts.
The datasets used are:
1. [ANTi-Vax: a novel Twitter dataset for COVID-19 vaccine misinformation detection](https://www.sciencedirect.com/science/article/pii/S0033350621004534)
2. [CoAID (Covid-19 HeAlthcare mIsinformation Dataset)](https://arxiv.org/abs/2006.00885)
For a more detailed explanation, check out the technical report [here](https://drive.google.com/file/d/1QW9D6TN4KXX6poa6Q5L6FVgqaDQ4DxY9/view?usp=sharing), and check out my literature review on transformers [here](https://drive.google.com/file/d/1d5tK3sUwYM1WBheOuNG9A7ZYri2zxdyw/view?usp=sharing)!
|
CLAck/vi-en | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6406
- Train End Logits Accuracy: 0.5766
- Train Start Logits Accuracy: 0.5397
- Validation Loss: 1.2711
- Validation End Logits Accuracy: 0.6595
- Validation Start Logits Accuracy: 0.6190
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
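As an illustration only (the card gives no usage snippet and the hub repo id is unknown), an extractive-QA checkpoint like this one is typically queried via the question-answering pipeline; the repo id below is a placeholder, and `framework="tf"` is assumed because training used Keras.

```python
from transformers import pipeline

# "<user>/test" is a placeholder repo id for this fine-tuned DistilBERT QA checkpoint.
qa = pipeline("question-answering", model="<user>/test", framework="tf")

answer = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.",
)
print(answer)  # {'score': ..., 'start': ..., 'end': ..., 'answer': 'distilbert-base-uncased'}
```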
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2766, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.6406 | 0.5766 | 0.5397 | 1.2711 | 0.6595 | 0.6190 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CLS/WubiBERT_models | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Aitor/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
CLTL/icf-domains | [
"pytorch",
"roberta",
"nl",
"transformers",
"license:mit",
"text-classification"
]
| text-classification | {
"architectures": [
"RobertaForMultiLabelSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 35 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Aitor/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
CLTL/icf-levels-etn | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
### Large-Scale Pre-Training for Goal-Directed Dialog (GODEL)
GODEL is a large-scale pre-trained model for goal-directed dialogs. It is parameterized with a Transformer-based encoder-decoder model and trained for response generation grounded in external text, which allows more effective fine-tuning on dialog tasks that require conditioning the response on information that is external to the current conversation (e.g., a retrieved document). The pre-trained model can be efficiently fine-tuned and adapted to accomplish a new dialog task with a handful of task-specific dialogs. The v1.1 model is trained on 551M multi-turn dialogs from Reddit discussion threads, and 5M instruction- and knowledge-grounded dialogs.
##### Multi-turn generation examples from an interactive environment:
Chitchat example:
> Instruction: given a dialog context, you need to response empathically. <br>
> User: Does money buy happiness? <br>
> Agent: It is a question. Money buys you a lot of things, but not enough to buy happiness. <br>
> User: What is the best way to buy happiness ? <br>
> Agent: Happiness is bought through your experience and not money. <br>
Grounded response generation example:
> Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge. <br>
> Knowledge: The best Stardew Valley mods PCGamesN_0 / About SMAPI <br>
> User: My favorite game is stardew valley. stardew valley is very fun. <br>
> Agent: I love Stardew Valley mods, like PCGamesN_0 / About SMAPI. <br>
Please find information about preprocessing, training and full details of GODEL on the [project webpage](https://aka.ms/GODEL).
ArXiv paper: [https://arxiv.org/abs/2206.11309](https://arxiv.org/abs/2206.11309)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq")
def generate(instruction, knowledge, dialog):
if knowledge != '':
knowledge = '[KNOWLEDGE] ' + knowledge
dialog = ' EOS '.join(dialog)
query = f"{instruction} [CONTEXT] {dialog} {knowledge}"
input_ids = tokenizer(f"{query}", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=128, min_length=8, top_p=0.9, do_sample=True)
output = tokenizer.decode(outputs[0], skip_special_tokens=True)
return output
# Instruction for a chitchat task
instruction = f'Instruction: given a dialog context, you need to response empathically.'
# Leave the knowledge empty
knowledge = ''
dialog = [
'Does money buy happiness?',
'It is a question. Money buys you a lot of things, but not enough to buy happiness.',
'What is the best way to buy happiness ?'
]
response = generate(instruction, knowledge, dialog)
print(response)
```
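The same `generate` helper also covers the grounded response generation example shown earlier; the snippet below simply reuses the instruction, knowledge string and user turn from that example:
```python
# Instruction and knowledge for a grounded response generation task
instruction = f'Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge.'
knowledge = 'The best Stardew Valley mods PCGamesN_0 / About SMAPI'
dialog = [
    'My favorite game is stardew valley. stardew valley is very fun.'
]
response = generate(instruction, knowledge, dialog)
print(response)
```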
### Citation
If you use this code and data in your research, please cite our arXiv paper:
```
@misc{peng2022godel,
author = {Peng, Baolin and Galley, Michel and He, Pengcheng and Brockett, Chris and Liden, Lars and Nouri, Elnaz and Yu, Zhou and Dolan, Bill and Gao, Jianfeng},
title = {GODEL: Large-Scale Pre-training for Goal-Directed Dialog},
howpublished = {arXiv},
year = {2022},
month = {June},
url = {https://www.microsoft.com/en-us/research/publication/godel-large-scale-pre-training-for-goal-directed-dialog/},
}
``` |
CLTL/icf-levels-fac | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
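As an illustration only (not part of the original card), a minimal inference sketch for this kind of fine-tuned checkpoint; the model path is a placeholder and the audio is assumed to be mono and resampled to 16 kHz:
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder path: point this at the saved fine-tuned checkpoint.
checkpoint = "path/to/wav2vec2-large-xls-r-300m-turkish-colab"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```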
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
CM-CA/DialoGPT-small-cartman | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: "en" # Example: en
license: "cc-by-4.0" # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
library_name: "transformers" # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
---
# Model description
This is the T5-3B model for System 1 as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407)
System 1: Using original data
Given the <Premise, Hypothesis, Label, Explanation> in the original data, we first trained a sequence-to-sequence model for the figurative language NLI task
using the following input-output format:
```
Input <Premise> <Hypothesis>
Output <Label> <Explanation>
```
# How to use this model?
We provide a quick example of how you can try out System 1 in our paper with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System1_FigLang2022")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-3b")
>>> input_string = "Premise: My neighbor actually purchased a dream car of mine and I see it parked in his driveway everyday just taunting me. Hypothesis: My neighbor's new car is exactly my dream car, and I feel so happy every time I see it parked in his driveway. Is there a contradiction or entailment between the premise and hypothesis?"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
["Answer : Contradiction. Explanation : Most people would not be happy to see someone else's new car that they cannot afford because it is way out of their budget"]
```
# More details about DREAM-FLUTE ...
For more details about DREAM-FLUTE, please refer to our:
* 📄Paper: https://arxiv.org/abs/2210.16407
* 💻GitHub Repo: https://github.com/allenai/dream/
This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of situation given in text. Check out our GitHub Repo for more!
# More details about this model ...
## Training and evaluation data
We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used an 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
## Model details
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b).
It achieves the following results on the evaluation set:
- Loss: 0.7602
- Rouge1: 58.1212
- Rouge2: 38.1109
- Rougel: 52.1198
- Rougelsum: 52.092
- Gen Len: 40.4851
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.0017 | 0.33 | 1000 | 0.8958 | 40.072 | 27.6729 | 38.429 | 38.4023 | 19.0 |
| 0.9054 | 0.66 | 2000 | 0.8336 | 41.4505 | 29.2616 | 39.5164 | 39.4976 | 19.0 |
| 0.8777 | 1.0 | 3000 | 0.7863 | 41.4221 | 29.6675 | 39.6719 | 39.6627 | 19.0 |
| 0.5608 | 1.33 | 4000 | 0.8007 | 41.1495 | 29.9008 | 39.5706 | 39.5554 | 19.0 |
| 0.5594 | 1.66 | 5000 | 0.7785 | 41.3834 | 30.2818 | 39.8259 | 39.8324 | 19.0 |
| 0.5498 | 1.99 | 6000 | 0.7602 | 41.6364 | 30.6513 | 40.1522 | 40.1332 | 19.0 |
| 0.3398 | 2.32 | 7000 | 0.8580 | 41.4948 | 30.7467 | 40.0274 | 40.0116 | 18.9954 |
| 0.3518 | 2.65 | 8000 | 0.8430 | 41.7283 | 31.178 | 40.3487 | 40.3328 | 18.9861 |
| 0.3465 | 2.99 | 9000 | 0.8405 | 41.956 | 31.527 | 40.5671 | 40.5517 | 18.9907 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | 2022-10-18T22:45:06Z | ---
language: "en" # Example: en
license: "cc-by-4.0" # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
library_name: "transformers" # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
---
# Model description
This is the T5-3B model for System 2 as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407)
System 2: Jointly predicting the type of figurative language
Using the type of figurative language provided as part of the training set (Chakrabarty et al., 2022), one of our models jointly predicts the type of figurative language together with the target label and explanation:
```
Input <Premise> <Hypothesis>
Output <Figurative-Language-Type> <Label> <Explanation>
```
# How to use this model?
We provide a quick example of how you can try out System 2 in our paper with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System2_FigLang2022")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-3b")
>>> input_string = "Premise: Yesterday two gangs were fighting just in front of my home. Hypothesis: Yesterday I saw two gangs fighting right in front of my house and it totally didn't make me scared at all. What is the type of figurative language involved? Is there a contradiction or entailment between the premise and hypothesis?"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
['Answer : [Type] Sarcasm [Label] Contradiction. Explanation : Seeing two gangs of people fighting in public can be really dangerous and scary, so someone who claims that they were not scared at all is being sarcastic.']
```
# More details about DREAM-FLUTE ...
For more details about DREAM-FLUTE, please refer to our:
* 📄Paper: https://arxiv.org/abs/2210.16407
* 💻GitHub Repo: https://github.com/allenai/dream/
This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of situation given in text. Check out our GitHub Repo for more!
# More details about this model ...
## Training and evaluation data
We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used an 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
## Model details
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b).
It achieves the following results on the evaluation set:
- Loss: 0.6078
- Rouge1: 62.8674
- Rouge2: 45.0585
- Rougel: 57.5618
- Rougelsum: 57.5172
- Gen Len: 50.7558
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.8068 | 0.33 | 1000 | 0.7251 | 30.6353 | 25.0792 | 30.619 | 30.6274 | 19.0 |
| 0.7276 | 0.66 | 2000 | 0.6715 | 30.8651 | 26.1492 | 30.8543 | 30.8519 | 19.0 |
| 0.7063 | 1.0 | 3000 | 0.6338 | 31.0263 | 26.6749 | 31.0094 | 31.0098 | 19.0 |
| 0.4516 | 1.33 | 4000 | 0.6447 | 30.9942 | 26.5984 | 30.9834 | 30.9778 | 19.0 |
| 0.4538 | 1.66 | 5000 | 0.6183 | 31.0179 | 26.7012 | 31.005 | 31.0018 | 19.0 |
| 0.4373 | 1.99 | 6000 | 0.6078 | 31.0085 | 26.7116 | 30.9952 | 30.9894 | 19.0 |
| 0.2743 | 2.32 | 7000 | 0.6910 | 31.0051 | 26.7349 | 30.9975 | 30.9851 | 19.0 |
| 0.2819 | 2.65 | 8000 | 0.6831 | 31.0876 | 26.848 | 31.0766 | 31.0753 | 19.0 |
| 0.2849 | 2.99 | 9000 | 0.6673 | 30.9223 | 26.5899 | 30.9165 | 30.9073 | 19.0 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CNT-UPenn/RoBERTa_for_seizureFrequency_QA | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | Access to model sd-dreambooth-library/snowvelvet is restricted and you are not in the authorized list. Visit https://huggingface.co/sd-dreambooth-library/snowvelvet to ask for access. |
CSResearcher/TestModel | [
"license:mit"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-10-18T22:51:06Z | ---
language: "en" # Example: en
license: "cc-by-4.0" # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
library_name: "transformers" # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
---
# Model description
This is the T5-3B model for System 3 DREAM-FLUTE (emotion), as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407)
Systems 3: DREAM-FLUTE - Providing DREAM’s different dimensions as input context
We adapt DREAM’s scene elaborations (Gu et al., 2022) for the figurative language understanding NLI task by using the DREAM model to generate elaborations for the premise and hypothesis separately. This allows us to investigate if similarities or differences in the scene elaborations for the premise and hypothesis will provide useful signals for entailment/contradiction label prediction and improving explanation quality. The input-output format is:
```
Input <Premise> <Premise-elaboration-from-DREAM> <Hypothesis> <Hypothesis-elaboration-from-DREAM>
Output <Label> <Explanation>
```
where the scene elaboration dimensions from DREAM are: consequence, emotion, motivation, and social norm. We also consider a system incorporating all these dimensions as additional context.
In this model, DREAM-FLUTE (emotion), we use elaborations along the "emotion" dimension. For more details on DREAM, please refer to DREAM: Improving Situational QA by First Elaborating the Situation, NAACL 2022 (Arxiv link: https://arxiv.org/abs/2112.08656, ACL Anthology link: https://aclanthology.org/2022.naacl-main.82/).
# How to use this model?
We provide a quick example of how you can try out DREAM-FLUTE (emotion) in our paper with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System3_DREAM_FLUTE_emotion_FigLang2022")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-3b")
>>> input_string = "Premise: we laid in the field of green grass and relaxed. [Premise - emotion] I (myself)'s emotion is happy. Hypothesis: we laid in fields of gold. [Hypothesis - emotion] I (myself)'s emotion is happy. Is there a contradiction or entailment between the premise and hypothesis?"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
['Answer : Entailment. Explanation : Gold is a color that is associated with happiness, so the fields of gold are associated with happiness.']
```
# More details about DREAM-FLUTE ...
For more details about DREAM-FLUTE, please refer to our:
* 📄Paper: https://arxiv.org/abs/2210.16407
* 💻GitHub Repo: https://github.com/allenai/dream/
This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of situation given in text. Check out our GitHub Repo for more!
# More details about this model ...
## Training and evaluation data
We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used an 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
## Model details
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b).
It achieves the following results on the evaluation set:
- Loss: 0.7557
- Rouge1: 58.5894
- Rouge2: 38.6
- Rougel: 52.5083
- Rougelsum: 52.4698
- Gen Len: 40.5607
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9974 | 0.33 | 1000 | 0.8938 | 39.8909 | 27.4849 | 38.2724 | 38.2772 | 18.9987 |
| 0.8991 | 0.66 | 2000 | 0.8294 | 41.2504 | 29.3637 | 39.4768 | 39.478 | 18.9987 |
| 0.8778 | 1.0 | 3000 | 0.7886 | 41.3175 | 29.7998 | 39.5926 | 39.5752 | 19.0 |
| 0.5592 | 1.33 | 4000 | 0.7973 | 41.0529 | 30.2234 | 39.5836 | 39.5931 | 19.0 |
| 0.5608 | 1.66 | 5000 | 0.7784 | 41.6251 | 30.6274 | 40.0233 | 39.9929 | 19.0 |
| 0.5433 | 1.99 | 6000 | 0.7557 | 41.8485 | 30.7651 | 40.3159 | 40.2707 | 19.0 |
| 0.3363 | 2.32 | 7000 | 0.8384 | 41.4456 | 30.8368 | 39.9368 | 39.9349 | 19.0 |
| 0.3434 | 2.65 | 8000 | 0.8529 | 41.7845 | 31.3056 | 40.3295 | 40.339 | 18.9920 |
| 0.3548 | 2.99 | 9000 | 0.8310 | 41.9755 | 31.601 | 40.4929 | 40.5058 | 18.9954 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CSZay/bart | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: "en" # Example: en
license: "cc-by-4.0" # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
library_name: "transformers" # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
---
# Model description
This is the T5-3B model for System 3 DREAM-FLUTE (motivation), as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407)
Systems 3: DREAM-FLUTE - Providing DREAM’s different dimensions as input context
We adapt DREAM’s scene elaborations (Gu et al., 2022) for the figurative language understanding NLI task by using the DREAM model to generate elaborations for the premise and hypothesis separately. This allows us to investigate if similarities or differences in the scene elaborations for the premise and hypothesis will provide useful signals for entailment/contradiction label prediction and improving explanation quality. The input-output format is:
```
Input <Premise> <Premise-elaboration-from-DREAM> <Hypothesis> <Hypothesis-elaboration-from-DREAM>
Output <Label> <Explanation>
```
where the scene elaboration dimensions from DREAM are: consequence, emotion, motivation, and social norm. We also consider a system incorporating all these dimensions as additional context.
In this model, DREAM-FLUTE (motivation), we use elaborations along the "motivation" dimension. For more details on DREAM, please refer to DREAM: Improving Situational QA by First Elaborating the Situation, NAACL 2022 (Arxiv link: https://arxiv.org/abs/2112.08656, ACL Anthology link: https://aclanthology.org/2022.naacl-main.82/).
# How to use this model?
We provide a quick example of how you can try out DREAM-FLUTE (motivation) in our paper with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System3_DREAM_FLUTE_motivation_FigLang2022")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-3b")
>>> input_string = "Premise: After years of service and contribution to the company, he was finally promoted. [Premise - motivation] Company's motivation is to recognize his hard work. Hypothesis: The company released him after many years of service. [Hypothesis - motivation] Company's motivation is to get someone else to work. Is there a contradiction or entailment between the premise and hypothesis?"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
['Answer : Contradiction. Explanation : To release someone means to let them go from a position, while to promote someone means to give them a higher position.']
```
# More details about DREAM-FLUTE ...
For more details about DREAM-FLUTE, please refer to our:
* 📄Paper: https://arxiv.org/abs/2210.16407
* 💻GitHub Repo: https://github.com/allenai/dream/
This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of situation given in text. Check out our GitHub Repo for more!
# More details about this model ...
## Training and evaluation data
We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used an 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
## Model details
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b).
It achieves the following results on the evaluation set:
- Loss: 0.7515
- Rouge1: 58.2308
- Rouge2: 38.281
- Rougel: 52.0293
- Rougelsum: 52.0425
- Gen Len: 40.5912
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9996 | 0.33 | 1000 | 0.8940 | 39.4794 | 27.1937 | 37.848 | 37.8416 | 18.9980 |
| 0.9033 | 0.66 | 2000 | 0.8276 | 41.3816 | 29.2481 | 39.5342 | 39.514 | 18.9987 |
| 0.8713 | 1.0 | 3000 | 0.7840 | 41.392 | 29.7142 | 39.642 | 39.6292 | 19.0 |
| 0.5631 | 1.33 | 4000 | 0.8079 | 41.1312 | 29.9449 | 39.5757 | 39.5775 | 19.0 |
| 0.5577 | 1.66 | 5000 | 0.7781 | 41.4609 | 30.3437 | 39.9114 | 39.8902 | 19.0 |
| 0.5426 | 1.99 | 6000 | 0.7515 | 41.9285 | 30.9247 | 40.3207 | 40.3087 | 19.0 |
| 0.33 | 2.32 | 7000 | 0.8601 | 41.6921 | 30.9567 | 40.1845 | 40.1805 | 18.9954 |
| 0.3442 | 2.65 | 8000 | 0.8910 | 41.5437 | 31.0748 | 40.1653 | 40.1579 | 18.9861 |
| 0.3474 | 2.99 | 9000 | 0.8354 | 41.8455 | 31.4446 | 40.5079 | 40.5116 | 18.9907 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CTBC/ATS | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-10-18T23:02:08Z | ---
language: "en" # Example: en
license: "cc-by-4.0" # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
library_name: "transformers" # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
---
# Model description
This is the T5-3B model for System 3 DREAM-FLUTE (consequence), as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407)
Systems 3: DREAM-FLUTE - Providing DREAM’s different dimensions as input context
We adapt DREAM’s scene elaborations (Gu et al., 2022) for the figurative language understanding NLI task by using the DREAM model to generate elaborations for the premise and hypothesis separately. This allows us to investigate if similarities or differences in the scene elaborations for the premise and hypothesis will provide useful signals for entailment/contradiction label prediction and improving explanation quality. The input-output format is:
```
Input <Premise> <Premise-elaboration-from-DREAM> <Hypothesis> <Hypothesis-elaboration-from-DREAM>
Output <Label> <Explanation>
```
where the scene elaboration dimensions from DREAM are: consequence, emotion, motivation, and social norm. We also consider a system incorporating all these dimensions as additional context.
In this model, DREAM-FLUTE (consequence), we use elaborations along the "likely consequence" dimension. For more details on DREAM, please refer to DREAM: Improving Situational QA by First Elaborating the Situation, NAACL 2022 (Arxiv link: https://arxiv.org/abs/2112.08656, ACL Anthology link: https://aclanthology.org/2022.naacl-main.82/).
# How to use this model?
We provide a quick example of how you can try out DREAM-FLUTE (consequence) in our paper with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System3_DREAM_FLUTE_consequence_FigLang2022")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-3b")
>>> input_string = "Premise: My decision-making skills are not purely based on emotions and gut. [Premise - likely consequence] I make more balanced and informed decisions. Hypothesis: My personal feelings color my judgment in this case. [Hypothesis - likely consequence] I make a decision that is not in the best interests of the company. Is there a contradiction or entailment between the premise and hypothesis?"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
["Answer : Contradiction. Explanation : To have personal feelings color one's judgment means to make decisions based on them, but this context describes making decisions based on facts and not emotions"]
```
# More details about DREAM-FLUTE ...
For more details about DREAM-FLUTE, please refer to our:
* 📄Paper: https://arxiv.org/abs/2210.16407
* 💻GitHub Repo: https://github.com/allenai/dream/
This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of situation given in text. Check out our GitHub Repo for more!
# More details about this model ...
## Training and evaluation data
We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used an 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
## Model details
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b).
It achieves the following results on the evaluation set:
- Loss: 0.7505
- Rouge1: 58.425
- Rouge2: 38.2333
- Rougel: 52.1326
- Rougelsum: 52.1316
- Gen Len: 41.0909
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9958 | 0.33 | 1000 | 0.8928 | 39.7038 | 27.4256 | 38.1226 | 38.1237 | 19.0 |
| 0.8973 | 0.66 | 2000 | 0.8252 | 41.4862 | 29.5302 | 39.6228 | 39.5913 | 18.9987 |
| 0.8837 | 1.0 | 3000 | 0.7854 | 41.2109 | 29.7022 | 39.6115 | 39.5989 | 19.0 |
| 0.5656 | 1.33 | 4000 | 0.8016 | 41.0368 | 29.76 | 39.4324 | 39.4341 | 19.0 |
| 0.5598 | 1.66 | 5000 | 0.7802 | 41.6073 | 30.3183 | 39.9937 | 39.9743 | 19.0 |
| 0.5495 | 1.99 | 6000 | 0.7505 | 41.7965 | 30.6031 | 40.1514 | 40.1509 | 19.0 |
| 0.3341 | 2.32 | 7000 | 0.8518 | 41.6758 | 30.9028 | 40.134 | 40.1415 | 18.9954 |
| 0.3493 | 2.65 | 8000 | 0.8544 | 41.5856 | 31.1526 | 40.154 | 40.1726 | 18.9940 |
| 0.3535 | 2.99 | 9000 | 0.8291 | 41.9552 | 31.4885 | 40.5239 | 40.5235 | 19.0 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CZWin32768/xlm-align | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2106.06381",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language: "en" # Example: en
license: "cc-by-4.0" # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
library_name: "transformers" # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
---
# Model description
This is the T5-3B model for System 3 DREAM-FLUTE (social norm), as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407)
Systems 3: DREAM-FLUTE - Providing DREAM’s different dimensions as input context
We adapt DREAM’s scene elaborations (Gu et al., 2022) for the figurative language understanding NLI task by using the DREAM model to generate elaborations for the premise and hypothesis separately. This allows us to investigate if similarities or differences in the scene elaborations for the premise and hypothesis will provide useful signals for entailment/contradiction label prediction and improving explanation quality. The input-output format is:
```
Input <Premise> <Premise-elaboration-from-DREAM> <Hypothesis> <Hypothesis-elaboration-from-DREAM>
Output <Label> <Explanation>
```
where the scene elaboration dimensions from DREAM are: consequence, emotion, motivation, and social norm. We also consider a system incorporating all these dimensions as additional context.
In this model, DREAM-FLUTE (social norm), we use elaborations along the "social norm" dimension. For more details on DREAM, please refer to DREAM: Improving Situational QA by First Elaborating the Situation, NAACL 2022 (Arxiv link: https://arxiv.org/abs/2112.08656, ACL Anthology link: https://aclanthology.org/2022.naacl-main.82/).
# How to use this model?
We provide a quick example of how you can try out DREAM-FLUTE (social norm) in our paper with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System3_DREAM_FLUTE_social_norm_FigLang2022")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-3b")
>>> input_string = "Premise: Sure ,he snorted just to make me feel even better about the already great situation. [Premise - social norm] It's good to make people feel better about a situation. Hypothesis: Sure, he snorted, just rub it in. [Hypothesis - social norm] It's rude to rub something in someone's face when they don't want to. Is there a contradiction or entailment between the premise and hypothesis?"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
['Answer : Contradiction. Explanation : To rub it in means to make someone feel bad about themselves, but in this sentence he is making the speaker feel better about the already great situation.']
```
# More details about DREAM-FLUTE ...
For more details about DREAM-FLUTE, please refer to our:
* 📄Paper: https://arxiv.org/abs/2210.16407
* 💻GitHub Repo: https://github.com/allenai/dream/
This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of situation given in text. Check out our GitHub Repo for more!
# More details about this model ...
## Training and evaluation data
We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used an 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
## Model details
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b).
It achieves the following results on the evaluation set:
- Loss: 0.7615
- Rouge1: 57.6814
- Rouge2: 37.489
- Rougel: 51.4698
- Rougelsum: 51.4842
- Gen Len: 40.8553
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9974 | 0.33 | 1000 | 0.8955 | 39.7614 | 27.3853 | 38.1015 | 38.0975 | 19.0 |
| 0.8983 | 0.66 | 2000 | 0.8300 | 41.4031 | 29.3769 | 39.5984 | 39.5886 | 19.0 |
| 0.8764 | 1.0 | 3000 | 0.7855 | 41.2619 | 29.5054 | 39.5859 | 39.5748 | 18.9980 |
| 0.5603 | 1.33 | 4000 | 0.8033 | 41.0015 | 29.8488 | 39.5492 | 39.522 | 18.9980 |
| 0.5619 | 1.66 | 5000 | 0.7869 | 41.3655 | 30.0581 | 39.7462 | 39.7231 | 19.0 |
| 0.5389 | 1.99 | 6000 | 0.7615 | 41.3902 | 30.2049 | 39.8797 | 39.8779 | 19.0 |
| 0.3325 | 2.32 | 7000 | 0.8776 | 41.1737 | 30.3441 | 39.6744 | 39.652 | 18.9954 |
| 0.3509 | 2.65 | 8000 | 0.8501 | 41.2653 | 30.5342 | 39.7315 | 39.7252 | 18.9907 |
| 0.3499 | 2.99 | 9000 | 0.8546 | 41.6401 | 31.0585 | 40.2659 | 40.2487 | 18.9907 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Caddy/UD | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: hmBERT-CoNLL-cp1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8690143162744776
- name: Recall
type: recall
value: 0.8887579939414338
- name: F1
type: f1
value: 0.8787752724852317
- name: Accuracy
type: accuracy
value: 0.9810170943499085
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hmBERT-CoNLL-cp1
This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0710
- Precision: 0.8690
- Recall: 0.8888
- F1: 0.8788
- Accuracy: 0.9810
## Model description
More information needed
## Intended uses & limitations
More information needed
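As a usage illustration (not from the original card), the fine-tuned checkpoint can be served through the standard token-classification pipeline; the model path below is a placeholder for wherever this checkpoint is stored:
```python
from transformers import pipeline

# Placeholder path: the local output directory or Hub repo of hmBERT-CoNLL-cp1.
ner = pipeline(
    "token-classification",
    model="path/to/hmBERT-CoNLL-cp1",
    aggregation_strategy="simple",  # merge word pieces into full entity spans
)

print(ner("George Washington lived in Virginia."))
```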
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.06 | 25 | 0.4115 | 0.3593 | 0.3708 | 0.3649 | 0.9002 |
| No log | 0.11 | 50 | 0.2263 | 0.6360 | 0.6898 | 0.6618 | 0.9456 |
| No log | 0.17 | 75 | 0.1660 | 0.7250 | 0.7582 | 0.7412 | 0.9564 |
| No log | 0.23 | 100 | 0.1520 | 0.7432 | 0.7775 | 0.7600 | 0.9597 |
| No log | 0.28 | 125 | 0.1343 | 0.7683 | 0.8103 | 0.7888 | 0.9645 |
| No log | 0.34 | 150 | 0.1252 | 0.7973 | 0.8230 | 0.8099 | 0.9691 |
| No log | 0.4 | 175 | 0.1021 | 0.8118 | 0.8398 | 0.8255 | 0.9724 |
| No log | 0.46 | 200 | 0.1056 | 0.8153 | 0.8411 | 0.8280 | 0.9727 |
| No log | 0.51 | 225 | 0.0872 | 0.8331 | 0.8612 | 0.8469 | 0.9755 |
| No log | 0.57 | 250 | 0.1055 | 0.8226 | 0.8418 | 0.8321 | 0.9725 |
| No log | 0.63 | 275 | 0.0921 | 0.8605 | 0.8640 | 0.8623 | 0.9767 |
| No log | 0.68 | 300 | 0.0824 | 0.8600 | 0.8787 | 0.8692 | 0.9788 |
| No log | 0.74 | 325 | 0.0834 | 0.8530 | 0.8771 | 0.8649 | 0.9787 |
| No log | 0.8 | 350 | 0.0758 | 0.8646 | 0.8876 | 0.8759 | 0.9800 |
| No log | 0.85 | 375 | 0.0727 | 0.8705 | 0.8866 | 0.8784 | 0.9810 |
| No log | 0.91 | 400 | 0.0734 | 0.8717 | 0.8899 | 0.8807 | 0.9811 |
| No log | 0.97 | 425 | 0.0713 | 0.8683 | 0.8889 | 0.8785 | 0.9810 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Calamarii/calamari | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
These are the midjourney styles that are pre-loaded in [Whatchamacallit](https://colab.research.google.com/github/aicrumb/whatchamacallit/blob/main/Whatchamacallit.ipynb)
These are the original textual inversion .bin files, compatible with most webUIs/notebooks that support textual inversion loading. They can easily be converted to the diffusers format; Whatchamacallit already contains code to do that if you need a reference (a short loading sketch also follows the style list below).
\- midj-strong: <br>
good at that weird surreal melty almost golden sort of style, looks like clip guided diffusion in my opinion
\- midj-portrait: <br>
a bit more subtle but still very cinematic and changes the image significantly but less so than midj-strong
\- midj-anthro: <br>
was finetuned on some anthropomorphic animals (not traditional furry style, but just animals standing like humans). good on other subjects though.
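For the diffusers route mentioned above, a rough sketch (not the Whatchamacallit code itself): recent diffusers releases expose `load_textual_inversion`, and the base model id, file name and placeholder token below are assumptions you would adapt to your setup.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load one of the inversion .bin files and bind it to a prompt token.
pipe.load_textual_inversion("midj-strong.bin", token="<midj-strong>")

image = pipe("a portrait in the style of <midj-strong>").images[0]
image.save("midj_strong_portrait.png")
```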
 |