modelId | tags | pipeline_tag | config | downloads | first_commit | card
---|---|---|---|---|---|---|
Chakita/gpt2_mwp | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2022-11-21T17:59:15Z | ---
license: bsd-3-clause
datasets:
- bookcorpus
- wikipedia
- openwebtext
---
# FlexiBERT-Mini model
Pretrained model on the English language using a masked language modeling (MLM) objective. It was found by executing a neural architecture search (NAS) over a design space of ~3.32 billion *flexible* and *heterogeneous* transformer architectures in [this paper](https://arxiv.org/abs/2205.11656). The model is case-sensitive.
# Model description
The model consists of diverse attention heads including the traditional self-attention and the discrete cosine transform (DCT). The design space also supports weighted multiplicative attention (WMA), discrete Fourier transform (DFT), and convolution operations in the same transformer model along with different hidden dimensions for each encoder layer.
# How to use
This model should be fine-tuned on a downstream task. Other models within the FlexiBERT design space can be generated using a model dictionary. See this [GitHub repo](https://github.com/JHA-Lab/txf_design-space) for more details. To instantiate a fresh FlexiBERT-Mini model (for pre-training using the MLM objective):
```python
from transformers import FlexiBERTConfig, FlexiBERTModel, FlexiBERTForMaskedLM
config = FlexiBERTConfig()
model_dict = {'l': 4, 'o': ['sa', 'sa', 'l', 'l'], 'h': [256, 256, 128, 128], 'n': [2, 2, 4, 4],
'f': [[512, 512, 512], [512, 512, 512], [1024], [1024]], 'p': ['sdp', 'sdp', 'dct', 'dct']}
config.from_model_dict(model_dict)
model = FlexiBERTForMaskedLM(config)
```
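For context, here is a minimal MLM forward-pass sketch that is *not* from the original repository: it assumes `FlexiBERTForMaskedLM` follows the standard 🤗 masked-LM interface, and the `bert-base-cased` tokenizer is only an assumption (chosen because the model is case-sensitive; the vocabulary must match the config in practice).
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Assumption: a cased WordPiece tokenizer compatible with the FlexiBERT config.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

texts = ["FlexiBERT supports heterogeneous encoder layers.",
         "Each layer can use a different attention operation."]
batch = collator([tokenizer(t, truncation=True, max_length=128) for t in texts])

# Assumes the standard 🤗 forward signature (input_ids, attention_mask, labels).
outputs = model(**batch)
print(outputs.loss)
```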
# Developer
[Shikhar Tuli](https://github.com/shikhartuli). For any questions, comments or suggestions, please reach me at [[email protected]](mailto:[email protected]).
# Cite this work
Cite our work using the following BibTeX entry:
```bibtex
@article{tuli2022jair,
title={{FlexiBERT}: Are Current Transformer Architectures too Homogeneous and Rigid?},
author={Tuli, Shikhar and Dedhia, Bhishma and Tuli, Shreshth and Jha, Niraj K.},
year={2022},
eprint={2205.11656},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
# License
BSD-3-Clause.
Copyright (c) 2022, Shikhar Tuli and Jha Lab.
All rights reserved.
See License file for more details.
|
Chan/distilroberta-base-finetuned-wikitext2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T18:29:10Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_2_layers_40_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_2_layers_40_epochs
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- label_smoothing_factor: 0.2
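As a rough illustration (this is not the original training script), these values map onto 🤗 `TrainingArguments` as follows; `output_dir` is a placeholder and the model/dataset wiring is omitted:
```python
from transformers import TrainingArguments

# Placeholder output_dir; the Adam betas/epsilon listed above are the 🤗 defaults.
training_args = TrainingArguments(
    output_dir="BERiT_2000_2_layers_40_epochs",
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    label_smoothing_factor=0.2,
)
```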
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 15.0851 | 0.19 | 500 | 8.5468 |
| 7.8971 | 0.39 | 1000 | 7.3376 |
| 7.3108 | 0.58 | 1500 | 7.1632 |
| 7.134 | 0.77 | 2000 | 7.0700 |
| 7.0956 | 0.97 | 2500 | 7.0723 |
| 7.0511 | 1.16 | 3000 | 6.9560 |
| 7.0313 | 1.36 | 3500 | 6.9492 |
| 7.0028 | 1.55 | 4000 | 6.9048 |
| 6.9563 | 1.74 | 4500 | 6.8456 |
| 6.9214 | 1.94 | 5000 | 6.8019 |
| 11.1596 | 2.13 | 5500 | 7.5882 |
| 7.5824 | 2.32 | 6000 | 7.1291 |
| 7.2581 | 2.52 | 6500 | 7.1123 |
| 7.2232 | 2.71 | 7000 | 7.1059 |
| 7.1734 | 2.9 | 7500 | 7.1120 |
| 7.1504 | 3.1 | 8000 | 7.0946 |
| 7.1314 | 3.29 | 8500 | 7.0799 |
| 7.1236 | 3.49 | 9000 | 7.1175 |
| 7.1275 | 3.68 | 9500 | 7.0905 |
| 7.1087 | 3.87 | 10000 | 7.0839 |
| 7.1212 | 4.07 | 10500 | 7.0822 |
| 7.1136 | 4.26 | 11000 | 7.0703 |
| 7.1025 | 4.45 | 11500 | 7.1035 |
| 7.0931 | 4.65 | 12000 | 7.0759 |
| 7.0899 | 4.84 | 12500 | 7.0883 |
| 7.0834 | 5.03 | 13000 | 7.1307 |
| 7.0761 | 5.23 | 13500 | 7.0642 |
| 7.0706 | 5.42 | 14000 | 7.0324 |
| 7.0678 | 5.62 | 14500 | 7.0704 |
| 7.0614 | 5.81 | 15000 | 7.0317 |
| 7.0569 | 6.0 | 15500 | 7.0421 |
| 7.057 | 6.2 | 16000 | 7.0250 |
| 7.0503 | 6.39 | 16500 | 7.0129 |
| 7.0529 | 6.58 | 17000 | 7.0316 |
| 7.0453 | 6.78 | 17500 | 7.0436 |
| 7.0218 | 6.97 | 18000 | 7.0064 |
| 7.0415 | 7.16 | 18500 | 7.0385 |
| 7.0338 | 7.36 | 19000 | 6.9756 |
| 7.0488 | 7.55 | 19500 | 7.0054 |
| 7.0347 | 7.75 | 20000 | 6.9946 |
| 7.0464 | 7.94 | 20500 | 7.0055 |
| 7.017 | 8.13 | 21000 | 7.0158 |
| 7.0159 | 8.33 | 21500 | 7.0052 |
| 7.0223 | 8.52 | 22000 | 6.9925 |
| 6.9989 | 8.71 | 22500 | 7.0307 |
| 7.0218 | 8.91 | 23000 | 6.9767 |
| 6.9998 | 9.1 | 23500 | 7.0096 |
| 7.01 | 9.3 | 24000 | 6.9599 |
| 6.9964 | 9.49 | 24500 | 6.9896 |
| 6.9906 | 9.68 | 25000 | 6.9903 |
| 7.0336 | 9.88 | 25500 | 6.9807 |
| 7.0053 | 10.07 | 26000 | 6.9776 |
| 6.9826 | 10.26 | 26500 | 6.9836 |
| 6.9897 | 10.46 | 27000 | 6.9886 |
| 6.9829 | 10.65 | 27500 | 6.9991 |
| 6.9849 | 10.84 | 28000 | 6.9651 |
| 6.9901 | 11.04 | 28500 | 6.9822 |
| 6.9852 | 11.23 | 29000 | 6.9921 |
| 6.9757 | 11.43 | 29500 | 6.9636 |
| 6.991 | 11.62 | 30000 | 6.9952 |
| 6.9818 | 11.81 | 30500 | 6.9799 |
| 6.9911 | 12.01 | 31000 | 6.9725 |
| 6.9423 | 12.2 | 31500 | 6.9540 |
| 6.9885 | 12.39 | 32000 | 6.9771 |
| 6.9636 | 12.59 | 32500 | 6.9475 |
| 6.9567 | 12.78 | 33000 | 6.9653 |
| 6.9749 | 12.97 | 33500 | 6.9711 |
| 6.9739 | 13.17 | 34000 | 6.9691 |
| 6.9651 | 13.36 | 34500 | 6.9569 |
| 6.9599 | 13.56 | 35000 | 6.9608 |
| 6.957 | 13.75 | 35500 | 6.9531 |
| 6.9539 | 13.94 | 36000 | 6.9704 |
| 6.958 | 14.14 | 36500 | 6.9478 |
| 6.9597 | 14.33 | 37000 | 6.9510 |
| 6.9466 | 14.52 | 37500 | 6.9625 |
| 6.9518 | 14.72 | 38000 | 6.9787 |
| 6.9509 | 14.91 | 38500 | 6.9391 |
| 6.9505 | 15.1 | 39000 | 6.9694 |
| 6.9311 | 15.3 | 39500 | 6.9440 |
| 6.9513 | 15.49 | 40000 | 6.9425 |
| 6.9268 | 15.69 | 40500 | 6.9223 |
| 6.9415 | 15.88 | 41000 | 6.9435 |
| 6.9308 | 16.07 | 41500 | 6.9281 |
| 6.9216 | 16.27 | 42000 | 6.9415 |
| 6.9265 | 16.46 | 42500 | 6.9164 |
| 6.9023 | 16.65 | 43000 | 6.9237 |
| 6.9407 | 16.85 | 43500 | 6.9100 |
| 6.9211 | 17.04 | 44000 | 6.9295 |
| 6.9147 | 17.23 | 44500 | 6.9131 |
| 6.9224 | 17.43 | 45000 | 6.9188 |
| 6.9215 | 17.62 | 45500 | 6.9077 |
| 6.915 | 17.82 | 46000 | 6.9371 |
| 6.906 | 18.01 | 46500 | 6.8932 |
| 6.91 | 18.2 | 47000 | 6.9100 |
| 6.8999 | 18.4 | 47500 | 6.9251 |
| 6.9113 | 18.59 | 48000 | 6.9078 |
| 6.9197 | 18.78 | 48500 | 6.9099 |
| 6.8985 | 18.98 | 49000 | 6.9074 |
| 6.9009 | 19.17 | 49500 | 6.8971 |
| 6.8937 | 19.36 | 50000 | 6.8982 |
| 6.9094 | 19.56 | 50500 | 6.9077 |
| 6.9069 | 19.75 | 51000 | 6.9006 |
| 6.8991 | 19.95 | 51500 | 6.8912 |
| 6.8924 | 20.14 | 52000 | 6.8881 |
| 6.899 | 20.33 | 52500 | 6.8899 |
| 6.9028 | 20.53 | 53000 | 6.8938 |
| 6.8997 | 20.72 | 53500 | 6.8822 |
| 6.8943 | 20.91 | 54000 | 6.9005 |
| 6.8804 | 21.11 | 54500 | 6.9048 |
| 6.8848 | 21.3 | 55000 | 6.9062 |
| 6.9072 | 21.49 | 55500 | 6.9104 |
| 6.8783 | 21.69 | 56000 | 6.9069 |
| 6.8879 | 21.88 | 56500 | 6.8938 |
| 6.8922 | 22.08 | 57000 | 6.8797 |
| 6.8892 | 22.27 | 57500 | 6.9168 |
| 6.8863 | 22.46 | 58000 | 6.8820 |
| 6.8822 | 22.66 | 58500 | 6.9130 |
| 6.8752 | 22.85 | 59000 | 6.8973 |
| 6.8823 | 23.04 | 59500 | 6.8933 |
| 6.8813 | 23.24 | 60000 | 6.8919 |
| 6.8787 | 23.43 | 60500 | 6.8855 |
| 6.8886 | 23.63 | 61000 | 6.8956 |
| 6.8744 | 23.82 | 61500 | 6.9092 |
| 6.8799 | 24.01 | 62000 | 6.8944 |
| 6.879 | 24.21 | 62500 | 6.8850 |
| 6.8797 | 24.4 | 63000 | 6.8782 |
| 6.8724 | 24.59 | 63500 | 6.8691 |
| 6.8803 | 24.79 | 64000 | 6.8965 |
| 6.8899 | 24.98 | 64500 | 6.8986 |
| 6.8873 | 25.17 | 65000 | 6.9034 |
| 6.8777 | 25.37 | 65500 | 6.8658 |
| 6.8784 | 25.56 | 66000 | 6.8803 |
| 6.8791 | 25.76 | 66500 | 6.8727 |
| 6.8736 | 25.95 | 67000 | 6.8832 |
| 6.8865 | 26.14 | 67500 | 6.8811 |
| 6.8668 | 26.34 | 68000 | 6.8817 |
| 6.8709 | 26.53 | 68500 | 6.8945 |
| 6.8755 | 26.72 | 69000 | 6.8777 |
| 6.8635 | 26.92 | 69500 | 6.8747 |
| 6.8752 | 27.11 | 70000 | 6.8875 |
| 6.8729 | 27.3 | 70500 | 6.8696 |
| 6.8728 | 27.5 | 71000 | 6.8659 |
| 6.8692 | 27.69 | 71500 | 6.8856 |
| 6.868 | 27.89 | 72000 | 6.8689 |
| 6.8668 | 28.08 | 72500 | 6.8877 |
| 6.8576 | 28.27 | 73000 | 6.8783 |
| 6.8633 | 28.47 | 73500 | 6.8828 |
| 6.8737 | 28.66 | 74000 | 6.8717 |
| 6.8702 | 28.85 | 74500 | 6.8485 |
| 6.8785 | 29.05 | 75000 | 6.8771 |
| 6.8818 | 29.24 | 75500 | 6.8815 |
| 6.8647 | 29.43 | 76000 | 6.8877 |
| 6.8574 | 29.63 | 76500 | 6.8920 |
| 6.8474 | 29.82 | 77000 | 6.8936 |
| 6.8558 | 30.02 | 77500 | 6.8768 |
| 6.8645 | 30.21 | 78000 | 6.8921 |
| 6.8786 | 30.4 | 78500 | 6.8604 |
| 6.8693 | 30.6 | 79000 | 6.8603 |
| 6.855 | 30.79 | 79500 | 6.8559 |
| 6.8429 | 30.98 | 80000 | 6.8746 |
| 6.8688 | 31.18 | 80500 | 6.8774 |
| 6.8735 | 31.37 | 81000 | 6.8643 |
| 6.8541 | 31.56 | 81500 | 6.8767 |
| 6.8695 | 31.76 | 82000 | 6.8804 |
| 6.8607 | 31.95 | 82500 | 6.8674 |
| 6.8538 | 32.15 | 83000 | 6.8572 |
| 6.8472 | 32.34 | 83500 | 6.8683 |
| 6.8763 | 32.53 | 84000 | 6.8758 |
| 6.8405 | 32.73 | 84500 | 6.8764 |
| 6.8658 | 32.92 | 85000 | 6.8614 |
| 6.8834 | 33.11 | 85500 | 6.8641 |
| 6.8554 | 33.31 | 86000 | 6.8787 |
| 6.8738 | 33.5 | 86500 | 6.8747 |
| 6.848 | 33.69 | 87000 | 6.8699 |
| 6.8621 | 33.89 | 87500 | 6.8654 |
| 6.8543 | 34.08 | 88000 | 6.8639 |
| 6.8606 | 34.28 | 88500 | 6.8852 |
| 6.8666 | 34.47 | 89000 | 6.8840 |
| 6.8717 | 34.66 | 89500 | 6.8773 |
| 6.854 | 34.86 | 90000 | 6.8671 |
| 6.8526 | 35.05 | 90500 | 6.8762 |
| 6.8592 | 35.24 | 91000 | 6.8644 |
| 6.8641 | 35.44 | 91500 | 6.8599 |
| 6.8655 | 35.63 | 92000 | 6.8622 |
| 6.8557 | 35.82 | 92500 | 6.8671 |
| 6.8546 | 36.02 | 93000 | 6.8573 |
| 6.853 | 36.21 | 93500 | 6.8542 |
| 6.8597 | 36.41 | 94000 | 6.8518 |
| 6.8576 | 36.6 | 94500 | 6.8700 |
| 6.8549 | 36.79 | 95000 | 6.8628 |
| 6.8576 | 36.99 | 95500 | 6.8695 |
| 6.8505 | 37.18 | 96000 | 6.8870 |
| 6.8564 | 37.37 | 96500 | 6.8898 |
| 6.8627 | 37.57 | 97000 | 6.8619 |
| 6.8502 | 37.76 | 97500 | 6.8696 |
| 6.8548 | 37.96 | 98000 | 6.8663 |
| 6.8512 | 38.15 | 98500 | 6.8683 |
| 6.8484 | 38.34 | 99000 | 6.8605 |
| 6.8581 | 38.54 | 99500 | 6.8749 |
| 6.8525 | 38.73 | 100000 | 6.8849 |
| 6.8375 | 38.92 | 100500 | 6.8712 |
| 6.8423 | 39.12 | 101000 | 6.8905 |
| 6.8559 | 39.31 | 101500 | 6.8574 |
| 6.8441 | 39.5 | 102000 | 6.8722 |
| 6.8467 | 39.7 | 102500 | 6.8550 |
| 6.8389 | 39.89 | 103000 | 6.8375 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Chandanbhat/distilbert-base-uncased-finetuned-cola | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T18:56:14Z | ---
license: mit
tags:
- generated_from_trainer
- nlu
- intent-classification
- text-classification
metrics:
- accuracy
- f1
model-index:
- name: xlm-r-base-amazon-massive-intent-label_smoothing
results:
- task:
name: intent-classification
type: intent-classification
dataset:
name: MASSIVE
type: AmazonScience/massive
split: test
metrics:
- name: F1
type: f1
value: 0.8879
datasets:
- AmazonScience/massive
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-base-amazon-massive-intent-label_smoothing
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MASSIVE1.1](https://huggingface.co/datasets/AmazonScience/massive) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5148
- Accuracy: 0.8879
- F1: 0.8879
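A minimal inference sketch (not part of the auto-generated card); the repo id below is a placeholder for wherever this checkpoint is hosted, and the printed label/score are illustrative:
```python
from transformers import pipeline

# "<namespace>" is a placeholder for the Hub account hosting this checkpoint.
classifier = pipeline(
    "text-classification",
    model="<namespace>/xlm-r-base-amazon-massive-intent-label_smoothing",
)
print(classifier("wake me up at nine am on friday"))
# e.g. [{'label': 'alarm_set', 'score': 0.97}]  (illustrative output)
```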
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 3.3945 | 1.0 | 720 | 2.7175 | 0.7900 | 0.7900 |
| 2.7629 | 2.0 | 1440 | 2.5660 | 0.8549 | 0.8549 |
| 2.5143 | 3.0 | 2160 | 2.5389 | 0.8711 | 0.8711 |
| 2.4678 | 4.0 | 2880 | 2.5172 | 0.8883 | 0.8883 |
| 2.4187 | 5.0 | 3600 | 2.5148 | 0.8879 | 0.8879 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2 |
CharlieChen/feedback-bigbird | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-vanilla-mtop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-vanilla-mtop
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1581
- Exact Match: 0.6331
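For reference, a minimal sketch of how an exact-match score like the one above is typically computed for semantic parsing; the example strings are illustrative, not taken from MTOP:
```python
def exact_match(predictions, references):
    """Fraction of predictions that match the reference string exactly."""
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

# Illustrative example (not real MTOP data):
print(exact_match(["[IN:GET_WEATHER [SL:LOCATION berlin ] ]"],
                  ["[IN:GET_WEATHER [SL:LOCATION berlin ] ]"]))  # 1.0
```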
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|
| 1.5981 | 6.65 | 200 | 0.1598 | 0.4940 |
| 0.1335 | 13.33 | 400 | 0.1155 | 0.5884 |
| 0.074 | 19.98 | 600 | 0.1046 | 0.6094 |
| 0.0497 | 26.65 | 800 | 0.1065 | 0.6139 |
| 0.0363 | 33.33 | 1000 | 0.1134 | 0.6255 |
| 0.0278 | 39.98 | 1200 | 0.1177 | 0.6313 |
| 0.022 | 46.65 | 1400 | 0.1264 | 0.6255 |
| 0.0183 | 53.33 | 1600 | 0.1260 | 0.6304 |
| 0.0151 | 59.98 | 1800 | 0.1312 | 0.6300 |
| 0.0124 | 66.65 | 2000 | 0.1421 | 0.6277 |
| 0.0111 | 73.33 | 2200 | 0.1405 | 0.6277 |
| 0.0092 | 79.98 | 2400 | 0.1466 | 0.6331 |
| 0.008 | 86.65 | 2600 | 0.1522 | 0.6340 |
| 0.007 | 93.33 | 2800 | 0.1590 | 0.6295 |
| 0.0064 | 99.98 | 3000 | 0.1581 | 0.6331 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Charlotte77/model_test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Third model is Nightmare Wet Worms. Prompt being "NghtmrWrmFrk". It's more based on my models that are full of tentacles, worms, maggots, wet looking, drippy....etc. This model isn't perfect and alot of words don't seem to matter as much, but you can still get some amazing results if your into this type of look. Heck, just type a bunch of random words and you get weird images! Keep the CFG low, steps at any amount though. Samples can be anything.

|
ChaseBread/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-11-21T19:08:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-vanilla-mtop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-vanilla-mtop
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2080
- Exact Match: 0.6394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|
| 1.0516 | 6.65 | 200 | 0.1173 | 0.5875 |
| 0.0541 | 13.33 | 400 | 0.1130 | 0.6331 |
| 0.0468 | 19.98 | 600 | 0.1290 | 0.6036 |
| 0.0241 | 26.65 | 800 | 0.1306 | 0.6273 |
| 0.0125 | 33.33 | 1000 | 0.1425 | 0.6291 |
| 0.0077 | 39.98 | 1200 | 0.1518 | 0.6345 |
| 0.0054 | 46.65 | 1400 | 0.1643 | 0.6362 |
| 0.004 | 53.33 | 1600 | 0.1718 | 0.6362 |
| 0.0033 | 59.98 | 1800 | 0.1803 | 0.6336 |
| 0.0026 | 66.65 | 2000 | 0.1808 | 0.6394 |
| 0.0021 | 73.33 | 2200 | 0.1915 | 0.6371 |
| 0.0017 | 79.98 | 2400 | 0.1919 | 0.6403 |
| 0.0013 | 86.65 | 2600 | 0.2024 | 0.6358 |
| 0.0011 | 93.33 | 2800 | 0.2049 | 0.6353 |
| 0.0008 | 99.98 | 3000 | 0.2080 | 0.6394 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Cheapestmedsshop/Buymodafinilus | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: GeoBERT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GeoBERT_Analyzer
GeoBERT_Analyzer is a Text Classification model that was fine-tuned from GeoBERT on the Geoscientific Corpus dataset.
The model was trained on the Labeled Geoscientific & Non-Geoscientific Corpus dataset (21416 x 2 sentences).
## Intended uses
The training aims to give the language model the ability to distinguish between geoscientific and non-geoscientific (general) text.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 14000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
## Model performances (metric: seqeval)
entity|precision|recall|f1
-|-|-|-
General |0.9976|0.9980|0.9978
Geoscience|0.9980|0.9984|0.9982
## How to use GeoBERT with HuggingFace
##### Load GeoBERT_Analyzer and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("botryan96/GeoBERT_analyzer")
model = AutoModelForSequenceClassification.from_pretrained("botryan96/GeoBERT_analyzer")

# Define the pipeline
from transformers import pipeline
analyze_machine = pipeline('text-classification', model=model, tokenizer=tokenizer)

# Define the sentences
sentences = ['the average iron and sulfate concentrations were calculated to be 19 . 6 5 . 2 and 426 182 mg / l , respectively .',
             'She first gained media attention as a friend and stylist of Paris Hilton']

# Deploy the machine
analyze_machine(sentences)
``` |
Cheatham/xlm-roberta-base-finetuned | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
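A minimal sketch of that computation follows; the example id and `answer_start` offset are placeholders, not real SQuAD entries:
```python
from datasets import load_metric

squad_metric = load_metric("squad")

# Illustrative example; ids and answer_start offsets are placeholders.
predictions = [{"id": "0001", "prediction_text": "Denver Broncos"}]
references = [{"id": "0001",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```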
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Cheatham/xlm-roberta-large-finetuned-d1 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | 2022-11-21T19:12:40Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### ataturkai Dreambooth model trained by thothai with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
|
Cheatham/xlm-roberta-large-finetuned-r01 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 23 | 2022-11-21T19:21:21Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Open Potion Bottle v2 Dreambooth model trained by [piEsposito](https://twitter.com/piesposi_to) with open weights, configs and prompts (as it should be)
- Concept: `potionbottle`
You can run this concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
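For local use, here is a minimal `diffusers` sketch (not from the original card) that mirrors the scheduler, CFG scale and step count listed in the examples below; the repo id is taken from the sample-image links, and the fp16/CUDA setup is an assumption:
```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Repo id taken from the sample image URLs in this card; fp16 + CUDA assumed.
pipe = StableDiffusionPipeline.from_pretrained(
    "piEsposito/openpotionbottle-v2", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = ("fantasy dragon inside a potionbottle, perfectly ornated, intricate details, "
          "3d render vray, uhd, beautiful, trending on artstation")
image = pipe(prompt, guidance_scale=10, num_inference_steps=30).images[0]
image.save("potionbottle.png")
```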
Sample pictures of this concept:
## Usage examples with `potionbottle`
- Prompt: fantasy dragon inside a potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation
- CFG Scale: 10
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
<img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/pottionbottle_1.png" width=512/>
- Prompt: potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation
- CFG Scale: 10
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
<img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/potionbottle_2.png" width=512/>
- Prompt: green potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation
- CFG Scale: 10
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
<img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/potionbottle_3.png" width=512/>
- Prompt: spiral galaxy inside a potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation
- CFG Scale: 10
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
<img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/potionbottle_4.png" width=512/>
- Prompt: lightning storm inside a potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation
- CFG Scale: 10
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
<img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/pottionbottle_5.png" width=512/>
- Prompt: pomeranian as a potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation
- CFG Scale: 10
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
<img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/potionbottle_6.png" width=512/>
- Prompt: milkshake as potionbottle, perfectly ornated, intricate details, 3d render vray, beautiful, trending on artstation
- CFG Scale: 10
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
<img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/pottionbottle_7.png" width=512/>
- Prompt: a square potionbottle full of fire. Art by smoose2. Caustic reflections, shadows
- CFG Scale: 10
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
<img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/pottionbottle_8.png" width=512/>
#### By https://twitter.com/piesposi_to
|
Cheatham/xlm-roberta-large-finetuned3 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 22 | 2022-11-21T19:21:32Z | ---
language: "en"
thumbnail:
tags:
- speechbrain
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- ECAPA-TDNN
license: "apache-2.0"
datasets:
- voxceleb
metrics:
- EER
- Accuracy
inference: true
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---
# Speaker Identification with ECAPA-TDNN embeddings on Voxceleb
This repository provides a pretrained ECAPA-TDNN model using SpeechBrain. The system can also be used to extract speaker embeddings. Since we could not find any SpeechBrain- or HuggingFace-compatible checkpoints trained only on the VoxCeleb2 development data, we decided to pre-train an ECAPA-TDNN system from scratch.
# Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss.
We use FBank features (16 kHz, 25 ms frame length, 10 ms hop length, 80 filter-bank channels) as the input. The model was trained with an initial learning rate of 0.001 and a batch size of 512 under a cyclical learning rate (CLR) policy for 20 epochs on 4 A100 GPUs. We employ additive noise and reverberation from the [MUSAN](http://www.openslr.org/17/) and [RIR](http://www.openslr.org/28/) datasets to enrich the training data. Pre-training the ECAPA-TDNN model takes approximately ten days.
# Performance
**VoxCeleb1-O** is the original verification test set from VoxCeleb1, consisting of 40 speakers; all speakers whose names start with "E" are reserved for testing. **VoxCeleb1-E** uses the entire VoxCeleb1 dataset, covering 1251 speakers. **VoxCeleb1-H** is a harder version of the evaluation set, consisting of 552536 pairs from 1190 speakers, where both speakers in each pair share the same nationality and gender. There are 18 nationality-gender combinations, each with at least 5 individuals.
| Splits | Backend | S-norm | EER(%) | minDCF(0.01) |
|:-------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| VoxCeleb1-O | cosine | no | 1.29 | 0.13 |
| VoxCeleb1-O | cosine | yes | 1.19 | 0.11 |
| VoxCeleb1-E | cosine | no | 1.42 | 0.16 |
| VoxCeleb1-E | cosine | yes | 1.31 | 0.14 |
| VoxCeleb1-H | cosine | no | 2.66 | 0.26 |
| VoxCeleb1-H | cosine | yes | 2.48 | 0.23 |
- VoxCeleb1-O: includes 37611 test pairs with 40 speakers.
- VoxCeleb1-E: includes 579818 test pairs with 1251 speakers.
- VoxCeleb1-H: includes 550894 test pairs with 1190 speakers.
# Compute the speaker embeddings
The system is trained with recordings sampled at 16kHz (single channel).
```python
import torch
import torchaudio
from speechbrain.pretrained.interfaces import Pretrained
from speechbrain.pretrained import EncoderClassifier
class Encoder(Pretrained):
MODULES_NEEDED = [
"compute_features",
"mean_var_norm",
"embedding_model"
]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def encode_batch(self, wavs, wav_lens=None, normalize=False):
# Manage single waveforms in input
if len(wavs.shape) == 1:
wavs = wavs.unsqueeze(0)
# Assign full length if wav_lens is not assigned
if wav_lens is None:
wav_lens = torch.ones(wavs.shape[0], device=self.device)
# Storing waveform in the specified device
wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device)
wavs = wavs.float()
# Computing features and embeddings
feats = self.mods.compute_features(wavs)
feats = self.mods.mean_var_norm(feats, wav_lens)
embeddings = self.mods.embedding_model(feats, wav_lens)
if normalize:
embeddings = self.hparams.mean_var_norm_emb(
embeddings,
torch.ones(embeddings.shape[0], device=self.device)
)
return embeddings
classifier = Encoder.from_hparams(
source="yangwang825/ecapa-tdnn-vox2"
)
signal, fs = torchaudio.load('spk1_snt1.wav')
embeddings = classifier.encode_batch(signal)
# embeddings.shape -> torch.Size([1, 1, 192])
```
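Building on the snippet above, a short verification sketch (not from the original card); the second file name and the decision threshold are illustrative only:
```python
import torch.nn.functional as F

# Continues from the snippet above; 'spk2_snt1.wav' and the 0.25 threshold are illustrative.
signal2, fs2 = torchaudio.load('spk2_snt1.wav')
emb1 = classifier.encode_batch(signal).squeeze()
emb2 = classifier.encode_batch(signal2).squeeze()
score = F.cosine_similarity(emb1, emb2, dim=0).item()
print("same speaker" if score > 0.25 else "different speakers", score)
```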
We will release our training results (models, logs, etc) shortly.
# References
1. Ravanelli et al., SpeechBrain: A General-Purpose Speech Toolkit, 2021
2. Desplanques et al., ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification, 2020 |
Check/vaw2tmp | [
"tensorboard"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T19:24:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
model-index:
- name: t5-small-finetuned-ta-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-ta-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.826 | 1.0 | 11351 | 3.6087 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
CheonggyeMountain-Sherpa/kogpt-trinity-poem | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2022-11-21T19:28:21Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: "food_crit "
---
### Jak's Creepy Critter Pack for Stable Diffusion
Trained using TheLastBen Dreambooth colab notebook, using 95 training images, 5000 training steps.
Use Prompt: "food_crit" in the beginning of your prompt followed by a food. No major prompt-crafting needed.
Thanks to /u/Jak_TheAI_Artist for supplying training images!
Sample pictures of this concept:
prompt: "food_crit, spaghetti and meatballs"

prompt: "food_crit, snowcone" 
prompt: "food_crit, cola cola, vibrant colors"
Steps: 27, Sampler: Euler a, CFG scale: 6, Seed: 1195328763
prompt: "shiny ceramic 3d painting, (mens's shoe creature) gum stuck to sole, high detail render, vibrant, cinematic lighting"
Negative prompt: painting, photoshop, illustration, blurry, dull, drawing
Steps: 40, Sampler: Euler a, CFG scale: 10, Seed: 1018346393
Prompt: "melting trippy zombie muscle car, smoking, with big eyes, hyperrealistic, intricate detail, high detail render, vibrant, cinematic lighting, shiny, ceramic, reflections"
Negative prompt: "painting, photoshop, illustration, blurry, dull"
Steps: 40, Sampler: Euler a, CFG scale: 10, Seed: 3713218290, Size: 960x512, Model hash: d9aa872b
|
Chertilasus/main | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T19:28:59Z | ---
language:
- te
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- Chai_Bisket_Stories_16-08-2021_14-17
metrics:
- wer
model-index:
- name: Whisper Small Telugu - Naga Budigam
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Chai_Bisket_Stories_16-08-2021_14-17
type: Chai_Bisket_Stories_16-08-2021_14-17
config: None
split: None
args: 'config: te, split: test'
metrics:
- name: Wer
type: wer
value: 77.48711850971065
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Telugu - Naga Budigam
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Chai_Bisket_Stories_16-08-2021_14-17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7063
- Wer: 77.4871
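For reference, a minimal sketch of how a WER figure like the one above can be computed, assuming the `evaluate` package is installed; the transcripts below are illustrative, not from the dataset:
```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative strings only; the real evaluation uses the Telugu test split.
predictions = ["this is a sample transcription"]
references = ["this is the sample transcription"]
print(100 * wer_metric.compute(predictions=predictions, references=references))
```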
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2933 | 2.62 | 500 | 0.3849 | 86.6429 |
| 0.0692 | 5.24 | 1000 | 0.3943 | 82.7190 |
| 0.0251 | 7.85 | 1500 | 0.4720 | 82.4415 |
| 0.0098 | 10.47 | 2000 | 0.5359 | 81.6092 |
| 0.0061 | 13.09 | 2500 | 0.5868 | 75.9413 |
| 0.0025 | 15.71 | 3000 | 0.6235 | 76.6944 |
| 0.0009 | 18.32 | 3500 | 0.6634 | 78.3987 |
| 0.0005 | 20.94 | 4000 | 0.6776 | 77.1700 |
| 0.0002 | 23.56 | 4500 | 0.6995 | 78.2798 |
| 0.0001 | 26.18 | 5000 | 0.7063 | 77.4871 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Chester/traffic-rec | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T19:30:17Z | ---
language: en
datasets:
- Dizex/InstaFoodSet
widget:
- text: "Today's meal: Fresh olive poké bowl topped with chia seeds. Very delicious!"
example_title: "Food example 1"
- text: "Tartufo Pasta with garlic flavoured butter and olive oil, egg yolk, parmigiano and pasta water."
example_title: "Food example 2"
tags:
- Instagram
- NER
- Named Entity Recognition
- Food Entity Extraction
- Social Media
- Informal text
- RoBERTa
license: mit
---
# InstaFoodRoBERTa-NER
## Model description
**InstaFoodRoBERTa-NER** is a fine-tuned RoBERTa model that is ready to use for **Named Entity Recognition** of food entities in informal text (social-media-like). It has been trained to recognize a single entity: food (FOOD).
Specifically, this model is a *roberta-base* model that was fine-tuned on a dataset consisting of 400 English Instagram posts related to food. The [dataset](https://huggingface.co/datasets/Dizex/InstaFoodSet) is open source.
## Intended uses
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Dizex/InstaFoodRoBERTa-NER")
model = AutoModelForTokenClassification.from_pretrained("Dizex/InstaFoodRoBERTa-NER")
pipe = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Today's meal: Fresh olive poké bowl topped with chia seeds. Very delicious!"
ner_entity_results = pipe(example, aggregation_strategy="simple")
print(ner_entity_results)
```
To get the extracted food entities as strings you can use the following code:
```python
def convert_entities_to_list(text, entities: list[dict]) -> list[str]:
ents = []
for ent in entities:
e = {"start": ent["start"], "end": ent["end"], "label": ent["entity_group"]}
if ents and -1 <= ent["start"] - ents[-1]["end"] <= 1 and ents[-1]["label"] == e["label"]:
ents[-1]["end"] = e["end"]
continue
ents.append(e)
return [text[e["start"]:e["end"]] for e in ents]
print(convert_entities_to_list(example, ner_entity_results))
```
This will result in the following output:
```python
['olive poké bowl', 'chia seeds']
```
## Performance on [InstaFoodSet](https://huggingface.co/datasets/Dizex/InstaFoodSet)
metric|val
-|-
f1 |0.91
precision |0.89
recall |0.93
|
Chikita1/www_stash_stock | [
"license:bsd-3-clause-clear"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T21:12:34Z | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/ai-characters/4elements-diffusion/resolve/main/gandr-collage.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
---
# 4elements-diffusion
##### A StableDiffusion All-In-One Legend of Korra style + Korra character Dreambooth model created by AI-Characters
#### For what tokens to use in your prompts to employ the desired effects scroll down to the following section of this page: "Tokens to use to prompt the artstyle as well as Korra's different outfits"

**Feel free to donate to my [KoFi](https://ko-fi.com/aicharacters)** to help me fund renting GPU's for further model creation and experimentation!
Follow me on [Twitter](https://twitter.com/ai_characters) and [Instagram](https://www.instagram.com/ai_characters/) for AI art posts and model updates!
## Quick Feature Overview
- Create anyone and anything in the LoK artstyle!
- Create Korra in any artstyle!
- Mix and match all of Korra's outfits however you want to!
- Give anyone Korra's outfits!
- Give Korra any outfits!
*This model is much trickier to use than other models, but in return it is very flexible and has high likeness!* **I thus highly recommend checking out the "How to correctly use this model" section of this page!**
--- This model is not yet final! I will keep working on it and trying to improve it! I also welcome anyone to use my uploaded dataset (see at the bottom of this page) to create a better version! ---
## IMPORTANT INFORMATION BEFORE YOU USE THIS MODEL
I highly recommend using img2img when using this model, either by converting photos into the Legend of Korra artstyle or by resizing your initial 512x512 txt2img Legend of Korra style generations up to 1024x1024 or higher resolutions. **Your initial 512x512 txt2img generations using the Legend of Korra artstyle WILL ALWAYS look like crap** if you generate shots of characters that are more zoomed out than just a closeup (e.g. half-body or full-shot). **Resizing the initial 512x512 generations to 1024x1024 or bigger** (full-shots will likely need 1536x1536 to look good) using img2img **will drastically improve your experience using this model!**
The model is also infected, e.g. photos output with this model WILL look different from those output in the vanilla SD model! So I recommend generating people in the vanilla SD model using txt2img first and then sending them to img2img and switching the model to this one and then applying the style! This way your result is more true to vanilla SD but just with the style applied!
**For more useful information on how to correctly use this model, see the "How to correctly use this model" section of this page!**
## Introduction
Welcome to my first ever published StableDiffusion model and the first public model **trained on the Legend of Korra artstyle**! But not just the artstyle: **I have trained this model on Korra, including *all* of her outfits, as well!** In total this model was trained using a manually captioned dataset of 1142 images: screencaps from the show, fanart, and cosplay photos.
I spent every day of the last 4 weeks working on this project and spent hundreds of euros renting many, many GPU hours on VastAI to experiment with various parameters. I have created more than 50 ckpt's since then, learned a ton, and gained a lot of insight.
## Recommended samplers, steps, CFG values and denoising strength settings (for img2img)
- Euler a at 20 steps for quick results
- LMS at 100-150 steps for higher quality results that also follow your prompt more closely
- DPM++ 2M Karras at 20 steps for an alternative to EulerA
- CFG value from 7 to 4 (4 can look better in terms of image quality because it will have less of the overtraining effect, but it can also look less detailed)
- denoising strength of 0.4-0.6 for general img2img, and up to around 0.8 for more hardcore cases where the style needs more denoising to be correctly applied (though that will change the image of course; also consider just doing multiple runs at 0.5-0.6)
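If you use `diffusers` rather than a web UI, here is a minimal img2img sketch (not part of the original card) applying the settings above; the repo id is assumed from the thumbnail link in the card metadata, and the fp16/CUDA setup is an assumption:
```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline, EulerAncestralDiscreteScheduler
from PIL import Image

# Repo id assumed from the card's thumbnail URL; fp16 + CUDA assumed.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "ai-characters/4elements-diffusion", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

init = Image.open("my_photo.png").convert("RGB").resize((1024, 1024))
out = pipe(
    prompt="woman with long blue hair wearing a traditional Japanese kimono, tlok artstyle",
    negative_prompt="blur, vignette, instagram",
    image=init,
    strength=0.5,            # 0.4-0.6 for general img2img, up to ~0.8 for stubborn cases
    guidance_scale=7,        # drop to 4 to reduce the "overtrained" look
    num_inference_steps=20,  # Euler a at 20 steps
).images[0]
out.save("tlok_style.png")
```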
## How to correctly use this model (it's not as simple as the other models floating around the web currently!)
This model is not as easy to use as some of the other models you might be used to. For good results, prompt engineering and img2img resizing are required. I highly recommend tinkering with the prompt weights, word order in the prompt, samplers, CFG and step values, and so on! The results can be well worth it!
**My recommendation is to generate a photo in the vanilla SD model, send it to img2img, then switch the model to this one and use the img2img function to transfer the image into the Legend of Korra style!** Also consider inpainting (though this model isn't trained on the new base inpainting model yet)! **I also recommend keeping prompts simple and the "zoom" closer to the character for better results! That said, a highly complex prompt can sometimes produce much better generations**; e.g. a bare "Emma Watson, tlok artstyle" will almost always produce much worse results than a more complex prompt!
- **The most important bit first: SD doesn't play well with the artstyle at the standard 512x512. So your initial 512x512 generations in the artstyle will need to be resized to 1024x1024 for half-body shots and 1536x1536 for full-body shots in order to look good.** Closeups will look okay in 512x512 but I still recommend upscaling to 1024x1024.
An example:
Initial 512x512 generation

Upscaled to 1024x1024 (with an inpainted face)

Upscaled to 1526x1536 (with an inpainted face)

- **I highly recommend using the following negative prompt for *all* generations** (no matter what style, aka it massively improves the tlok artstyle generations as well!):
**blur, vignette, instagram**
This will drastically reduce the "overtrained effect" of the generations, e.g. too bright, vignetted and fried images. I have no idea why that works. It just does.
Examples:
Without the negative prompt:

With the negative prompt:

- Only for photos: You can add "photo, tlok artstyle" to the negative prompt for a further reduction in the "overtrained effect"! Doesn't always work, but sometimes does! Having photo in both the positive and negative prompt may sound nonsensical, but it works!
- **Also consider going from a CFG value of 7 down to a CFG value of 4.** This will make the image somewhat less detailed but it will also look much better in certain cases!
Example:
CFG value of 7:

CFG value of 4:

- **Use "cosplay photo" and not just "photo" in your positive prompt as just "photo" is sometimes not strong enough to force through the photo style, while "cosplay photo" almost always is because the captions were trained on that!**
Example:
Just "photo"

"cosplay photo"

- The model was trained using captions such as "cosplay photo", "full-shot", "half-body", "closeup", "facial closeup", among others. **So in case you are trying to force a different style but the tlok artstyle keeps popping up, try changing "full-shot" to "full-body" for example!**
- **Alternatively, add "tlok artstyle" to the negative prompt if you find that the Legend of Korra style is influencing your prompt too strongly!**
Example:
"19th century oil painting"

"19th century oil painting (negative prompt "tlok artstyle")"

- **Sometimes the photo generations of Korra will be too white, add "white skin" to the negative prompt in that case!**
## Example generations using this model!
empire state building tlok artstyle (using img2img)

woman with long blue hair wearing a traditional Japanese kimono during golden hour lighting tlok artstyle (resized with img2img + face inpainted)

young woman with red hair wearing modern casual white tshirt and blue jeans standing in front of the Brandenburg Gate tlok artstyle (resized with img2img + face inpainted)

written letter tlok artstyle (resized using img2img)

Korra wearing business suit stada hairstyle tlok artstyle (resized with img2img + face inpainted)

full-shot Korra wearing astronaut outfit stada hairstyle tlok artstyle (resized using img2img)

Korra wearing defa outfit stada hairstyle as a cute pixar character (resized using img2img)

half-body Korra wearing Kimono taio hairstyle figurine (resized using img2img)

dog tlok artstyle

mountain river valley tlok artstyle

Korra wearing bikini shoa hairstyle realistic detailed digital art by Greg Rutkowski

Korra wearing rain jacket and jeans stada hairstyle cosplay photograph

car on a road city street background tlok artstyle (resized using img2img)

Emma Watson (wearing defa outfit:1.3) cosplay photograph (resized using img2img)
%20cosplay%20photograph.png)
Zendaya standing in a forest wearing runa outfit tlok artstyle (resized using img2img + face inpainted)

## Tokens to use to prompt the artstyle as well as Korra's different outfits
**You can also give Korra's outfits and hairstyles to other people thanks to the token method! You can also mix and match outfits and hairstyles however you want to** (see the code sketch after the token lists below), though results may at times be worse than if you just pair the correct hairstyle with the correct outfit (aka as it was in the show)!
Legend of Korra artstyle:
- tlok artstyle
Korra's hairstyles:
- stada hairstyle (Default ponytail hair)
- oped hairstyle (Opened hair)
- loes hairstyle (Loose hair)
- shoa hairstyle (Season4 short hair)
- taio hairstyle (Traditional formal hair)
- foha hairstyle (Season4 formal hair)
- okch hairstyle (young child Korra hairstyle)
Korra's outfits:
"wearing X outfit"
**(the second words are the hairstyles, e.g. with "runa shoa" "runa" is the outfit and "shoa" the hairstyle; prompting the corresponding hairstyle alongside the outfit will give you better likeness, but you can also mix and match different hairstyles and outfits together as you see fit at the cost of likeness, though some outfits and hairstyles work better than others in this regard)**
- runa shoa (earth kingdom runaway)
- saco stada (default parka)
- aino stada (airnomad (makes her look like a child for some reason))
- fife stada (fireferrets probending uniform)
- eqli stada (equalist disguise)
- boez oped (season2 parka)
- defa stada (default outfit)
- alte stada (season2 outfit)
- asai shoa (Asami's jacket (doesn't work so well))
- taso stada (Tarrlok's taskforce)
- dava oped (dark avatar/season 3 finale)
- seri foha (series finale gown)
- fose shoa (season4 outfit)
- proe stada (probending training attire)
- tuwa shoa (turfwars finale gown from the comics (doesn't work so well))
- cidi stada (civilian disguise)
- epgo taio (traditional dress)
- bafo loes (bath/sleeping robe)
- ektu shoa (earth kingdom tunic/hoodie)
- pama loes (pajamas)
- exci stada (firebending exercise (doesn't work so well))
- as chie, wearing yowi (child korra, winter outfit from the comics)
- as chie, wearing suou (child korra, summer outfit)
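As a quick illustration of how these tokens compose, here is a minimal txt2img sketch with the same caveats as the earlier one: the local path stands in for a diffusers-format conversion of this checkpoint and is not an official repo id.
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
"./4elements-diffusion", torch_dtype=torch.float16 # placeholder path
).to("cuda")
# artstyle token + outfit token + matching hairstyle token
prompt = "full-shot Korra wearing defa outfit stada hairstyle tlok artstyle"
image = pipe(
prompt,
negative_prompt="blur, vignette, instagram",
guidance_scale=4,
).images[0]
image.save("korra_defa_stada.png")
```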
## Current shortcomings of the model
- the model is "infected" (the style bleeds into everything) because it was trained without regularization images. This means better likeness, but it also means you are better off using the original vanilla SD model for txt2img photo generations and then sending them to img2img, switching the model over to this one for style transfer!
- the model may struggle at times with more complex prompts
- location tagging is very rudimentary for now (exterior, day, arctic)
- Landscapes could look better
- No tagging of unique locations, e.g. Republic City
- Korra is the only trained character for now
- a few of the outfits don't work that well because of a low number of training images or low-resolution images. Generally, some outfits, people, things, styles and prompts will work better than others
- likeness was better for certain prompts in my older models
## Outlook into the future
- Ideally I will be able to expand upon this model in the future by adding all the other characters from the show and maybe even ATLA characters! However, right now I am uncertain if that is possible, as the model is already heavily trained.
- Generally I want to improve this model's likeness and flexibility
- Training this model on the new base inpainting model
- I seek to produce more models in the future such as models for Ahsoka, Aloy, Owl House, Ghibli, Sadie Sink, She-Ra, various online artists... but that will take time (and money)
## How I created this model and the underlying dataset (+ dataset download link!)
At first I wanted to create only a small Korra model with only her default outfit. In the first days I was experimenting with the standard class-and-token Dreambooth method using JoePenna's repo. For that I manually downloaded 900 screenshots from the show of Korra in her default outfit from fancaps.net, then manually cropped and resized those images. As I ran into walls, I stopped trying to create this model and restarted, trying to create a general style model using native finetuning instead.

This time I used the 40€ paid version of "Bulk Image Downloader" to automatically download around 30000 screencaps of the show from fancaps.net. I then used AntiDupl.NET to delete around half of the images, which the program found to be duplicates. I then used ChaiNNer and IrfanView to bulk crop and resize the rest of the dataset to 512x512. I also downloaded around 200 high-quality fanarts and cosplay photos depicting Korra in her various outfits and some non-show outfits and used IrfanView to automatically resize them to 512x512 without cropping by adding black borders to the image (those do not show up in the final model output, luckily).
I spent a lot of money on GPU renting for the native finetuning, but results were worse than my Dreambooth experiments, so I went back to Dreambooth and used a small fraction of the finetuning dataset to create a style model. I learned a lot this time around and improved my model results, but they were still not to my liking.
That is when I found out about the caption method in JoePenna's repo. So I went ahead and spent an entire weekend, 12 hours each day, manually captioning around 1000 images. I used around 300 images from the former finetuning dataset for the style, 600 from the former 900 manually cropped and resized screencaps of Korra in her default outfit, then around 200 fanarts and cosplay photos and some additional screencaps and images of Korra in all her other outfits, to create my final dataset.
I used "Bulk File Rename" for Windows 10 to bulk rename files aka add captions.
**[The captioned dataset can be found here!](https://huggingface.co/datasets/ai-characters/4elements-diffusion-captioned-dataset)**
**[The 14000 show screencaps can be found for download here!](https://www.dropbox.com/s/406u0tv9xuttgku/14284%20images%2C%20512x512%2C%20automatically%20cropped%2C%20downscaled%20from%201080x1080.7z?dl=1)**
I encourage everyone to try and do it better than me and create your own Legend of Korra model!
Ultimately I spent the past two weeks experimenting with various different captions and training settings to reach my final model.
My final model uses these training settings:
- Repo: JoePenna's with captions (no class or regularization and only a fake token that will not be used during training)
- Learning rate: 3e-6 (for 80 repeats) and 2e-6 (for 35 repeats)
- Repeats/Steps: See above (1 repeat = one run through the entire dataset, so 1142 steps)
I had to use such high learning rates because the size of the dataset and the caption-based training required them to attain the likeness I wanted for both the style and all of Korra's outfits.
There is much more to be said here regarding my workflow, experimentation, and the like, but I don't want to make this longer than necessary and this *is* already very long.
## Alternative Download Links
[Alternative download link for the model](https://www.dropbox.com/s/ayyk6c039gux7zs/4elements-diffusion.ckpt?dl=1)
[Alternative download link for the captioned dataset](https://www.dropbox.com/s/iobslrmyvdoi8oy/1142%20images%2C%20manually%20captioned%2C%20manual%20and%20automatic%20cropping%2C%20downscaled%20from%201024x1024.7z?dl=1)
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Ching/negation_detector | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | Access to model wmduggan41/kd-distilBERT-clinc is restricted and you are not in the authorized list. Visit https://huggingface.co/wmduggan41/kd-distilBERT-clinc to ask for access. |
Chinmay/mlindia | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T19:37:03Z | ---
license: creativeml-openrail-m
---
Anything-V3.0-based Stable Diffusion model, Dreambooth-trained on the general art style of Daniel Conway. Trained for 2,400 steps using 30 total training images.
## Usage
Can be used in Stable Diffusion, including the extremely popular Web UI by Automatic1111, like any other model: place the .ckpt file in the correct directory. Please consult the documentation for your installation of Stable Diffusion for more specific instructions.
Use the following tokens in your prompt to achieve the desired output.
Token: ```"dconway"``` Class: ```"illustration style"```
I have generally found the best results from using the token and class together at the beginning of the prompt. You can also try using one or the other or mixing them in other ways to achieve different outputs.
Example Prompt 1: ```"dconway illustration style, 1girl, pink hair, blue eyes, french braid, hair bun, single sidelock, adjusting hair, light smile, parted lips, looking at viewer, head tilt, atrium, bird cage, water, potted plant, clock, fountain, dappled sunlight, sunbeam, light rays, caustics, bloom, extremely detailed, intricate, masterpiece, best quality"```
Example Prompt 2: ```"dconway illustration style, a beautiful landscape with a river rushing towards a mountain range in the distance with clouds above, glacier, flower"```
For a more anime style try adding ```"3d model"``` to your negative prompt.
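If you prefer scripting over the Web UI, here is a minimal sketch using the 🤗 diffusers library. It assumes the .ckpt has been converted to the diffusers format; the local path below is a placeholder, not an official repo id.
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
"./dconway-diffusers", torch_dtype=torch.float16 # placeholder local path
).to("cuda")
prompt = (
"dconway illustration style, a beautiful landscape with a river rushing "
"towards a mountain range in the distance with clouds above, glacier, flower"
)
image = pipe(prompt, negative_prompt="3d model").images[0]
image.save("dconway_landscape.png")
```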
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Chiuchiyin/DialoGPT-small-Donald | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-11-21T19:40:36Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
This is a fine-tuned Stable Diffusion model trained on screenshots from The Clone Wars TV series. Use the token "clonewars style" in your prompts for the effect.
**If you enjoy my work, please consider supporting me:**
[](https://ko-fi.com/trystar)
## Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run CloneDiffusion:
[](https://huggingface.co/spaces/akhaliq/CloneDiffusion)
**Star Wars Characters**

**How to use?**
Use prompt "clonewars style" before your full prompt.
I recommend Steps: 50, Sampler: Euler a and CFG scale: 7
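For scripted use, those settings translate roughly to the following diffusers sketch ("Euler a" corresponds to the `EulerAncestralDiscreteScheduler`); the model path below is a placeholder for a diffusers-format copy of the checkpoint.
```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
pipe = StableDiffusionPipeline.from_pretrained(
"./CloneDiffusion", torch_dtype=torch.float16 # placeholder local path
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # "Euler a"
image = pipe(
"clonewars style portrait of a jedi knight", # example prompt, token goes first
num_inference_steps=50, # Steps: 50
guidance_scale=7, # CFG scale: 7
).images[0]
image.save("clonewars_jedi.png")
```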
This model was trained using the diffusers based dreambooth training by [TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb)
created by TryStar |
Chiuchiyin/Donald | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-23T11:04:11Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_2_layers_300_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_2_layers_300_epochs
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
- label_smoothing_factor: 0.2
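For reference, the listed hyperparameters correspond roughly to the following `TrainingArguments`; this is only a sketch, since the dataset and data pipeline are not documented in this card.
```python
from transformers import TrainingArguments
args = TrainingArguments(
output_dir="BERiT_2000_2_layers_300_epochs",
learning_rate=5e-4, # 0.0005
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
seed=42,
lr_scheduler_type="linear",
num_train_epochs=300,
label_smoothing_factor=0.2, # Adam betas/epsilon use the defaults listed above
)
```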
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:------:|:---------------:|
| 14.9639 | 0.19 | 500 | 8.4999 |
| 7.8976 | 0.39 | 1000 | 7.3944 |
| 7.3281 | 0.58 | 1500 | 7.1320 |
| 7.1202 | 0.77 | 2000 | 7.0376 |
| 7.0738 | 0.97 | 2500 | 7.0277 |
| 7.0327 | 1.16 | 3000 | 6.9313 |
| 6.9775 | 1.36 | 3500 | 6.8580 |
| 6.9568 | 1.55 | 4000 | 6.7909 |
| 6.9242 | 1.74 | 4500 | 6.7869 |
| 6.8842 | 1.94 | 5000 | 6.7403 |
| 6.8904 | 2.13 | 5500 | 6.7860 |
| 6.8757 | 2.32 | 6000 | 6.7235 |
| 6.8164 | 2.52 | 6500 | 6.7383 |
| 6.8439 | 2.71 | 7000 | 6.6904 |
| 6.8074 | 2.9 | 7500 | 6.7116 |
| 6.79 | 3.1 | 8000 | 6.6995 |
| 6.7915 | 3.29 | 8500 | 6.6930 |
| 6.7664 | 3.49 | 9000 | 6.6794 |
| 6.7822 | 3.68 | 9500 | 6.6467 |
| 6.7585 | 3.87 | 10000 | 6.6787 |
| 6.7784 | 4.07 | 10500 | 6.6596 |
| 6.7344 | 4.26 | 11000 | 6.6315 |
| 6.7374 | 4.45 | 11500 | 6.7104 |
| 6.7309 | 4.65 | 12000 | 6.6566 |
| 6.728 | 4.84 | 12500 | 6.6726 |
| 6.7154 | 5.03 | 13000 | 6.6502 |
| 6.7159 | 5.23 | 13500 | 6.6477 |
| 6.7114 | 5.42 | 14000 | 6.6440 |
| 6.7111 | 5.62 | 14500 | 6.6685 |
| 6.7038 | 5.81 | 15000 | 6.6363 |
| 6.7037 | 6.0 | 15500 | 6.6036 |
| 6.7 | 6.2 | 16000 | 6.6199 |
| 6.6864 | 6.39 | 16500 | 6.5995 |
| 6.6944 | 6.58 | 17000 | 6.6211 |
| 6.6743 | 6.78 | 17500 | 6.6274 |
| 6.6519 | 6.97 | 18000 | 6.5919 |
| 6.6707 | 7.16 | 18500 | 6.6141 |
| 6.6722 | 7.36 | 19000 | 6.5356 |
| 6.6695 | 7.55 | 19500 | 6.5895 |
| 6.6699 | 7.75 | 20000 | 6.5913 |
| 6.6783 | 7.94 | 20500 | 6.6037 |
| 6.651 | 8.13 | 21000 | 6.6032 |
| 6.6415 | 8.33 | 21500 | 6.5818 |
| 6.6485 | 8.52 | 22000 | 6.5829 |
| 6.6232 | 8.71 | 22500 | 6.6029 |
| 6.6407 | 8.91 | 23000 | 6.5676 |
| 6.6265 | 9.1 | 23500 | 6.6313 |
| 6.6436 | 9.3 | 24000 | 6.5415 |
| 6.6196 | 9.49 | 24500 | 6.5655 |
| 6.6093 | 9.68 | 25000 | 6.5663 |
| 6.6354 | 9.88 | 25500 | 6.5946 |
| 6.6202 | 10.07 | 26000 | 6.5805 |
| 6.5849 | 10.26 | 26500 | 6.5799 |
| 6.6035 | 10.46 | 27000 | 6.5763 |
| 6.5922 | 10.65 | 27500 | 6.5716 |
| 6.5924 | 10.84 | 28000 | 6.5744 |
| 6.6083 | 11.04 | 28500 | 6.5326 |
| 6.5896 | 11.23 | 29000 | 6.5797 |
| 6.607 | 11.43 | 29500 | 6.5312 |
| 6.5942 | 11.62 | 30000 | 6.5917 |
| 6.5863 | 11.81 | 30500 | 6.5619 |
| 6.5841 | 12.01 | 31000 | 6.5590 |
| 6.548 | 12.2 | 31500 | 6.4872 |
| 6.5831 | 12.39 | 32000 | 6.5914 |
| 6.5577 | 12.59 | 32500 | 6.5784 |
| 6.5585 | 12.78 | 33000 | 6.5267 |
| 6.5722 | 12.97 | 33500 | 6.5539 |
| 6.5832 | 13.17 | 34000 | 6.5535 |
| 6.5704 | 13.36 | 34500 | 6.5624 |
| 6.5437 | 13.56 | 35000 | 6.5531 |
| 6.5492 | 13.75 | 35500 | 6.5616 |
| 6.5437 | 13.94 | 36000 | 6.5502 |
| 6.5652 | 14.14 | 36500 | 6.4985 |
| 6.5573 | 14.33 | 37000 | 6.5386 |
| 6.5523 | 14.52 | 37500 | 6.4916 |
| 6.5636 | 14.72 | 38000 | 6.5613 |
| 6.5485 | 14.91 | 38500 | 6.5201 |
| 6.5424 | 15.1 | 39000 | 6.5921 |
| 6.5429 | 15.3 | 39500 | 6.5397 |
| 6.5518 | 15.49 | 40000 | 6.5255 |
| 6.5362 | 15.69 | 40500 | 6.5129 |
| 6.5329 | 15.88 | 41000 | 6.5395 |
| 6.535 | 16.07 | 41500 | 6.5706 |
| 6.5367 | 16.27 | 42000 | 6.5382 |
| 6.5227 | 16.46 | 42500 | 6.5180 |
| 6.5019 | 16.65 | 43000 | 6.5454 |
| 6.5536 | 16.85 | 43500 | 6.5399 |
| 6.52 | 17.04 | 44000 | 6.5285 |
| 6.5117 | 17.23 | 44500 | 6.5488 |
| 6.5367 | 17.43 | 45000 | 6.5246 |
| 6.5167 | 17.62 | 45500 | 6.5400 |
| 6.531 | 17.82 | 46000 | 6.5299 |
| 6.5273 | 18.01 | 46500 | 6.4898 |
| 6.5035 | 18.2 | 47000 | 6.5093 |
| 6.4885 | 18.4 | 47500 | 6.5586 |
| 6.5234 | 18.59 | 48000 | 6.5677 |
| 6.5092 | 18.78 | 48500 | 6.4785 |
| 6.4866 | 18.98 | 49000 | 6.4909 |
| 6.4985 | 19.17 | 49500 | 6.5219 |
| 6.5003 | 19.36 | 50000 | 6.4935 |
| 6.5253 | 19.56 | 50500 | 6.4785 |
| 6.486 | 19.75 | 51000 | 6.5521 |
| 6.4977 | 19.95 | 51500 | 6.5230 |
| 6.4825 | 20.14 | 52000 | 6.5060 |
| 6.4925 | 20.33 | 52500 | 6.4851 |
| 6.5028 | 20.53 | 53000 | 6.5300 |
| 6.5019 | 20.72 | 53500 | 6.5044 |
| 6.4749 | 20.91 | 54000 | 6.4900 |
| 6.4724 | 21.11 | 54500 | 6.5211 |
| 6.4873 | 21.3 | 55000 | 6.4883 |
| 6.4979 | 21.49 | 55500 | 6.4993 |
| 6.4646 | 21.69 | 56000 | 6.5576 |
| 6.4888 | 21.88 | 56500 | 6.4719 |
| 6.4996 | 22.08 | 57000 | 6.4848 |
| 6.4694 | 22.27 | 57500 | 6.5130 |
| 6.4757 | 22.46 | 58000 | 6.4858 |
| 6.4744 | 22.66 | 58500 | 6.5284 |
| 6.4807 | 22.85 | 59000 | 6.4736 |
| 6.4873 | 23.04 | 59500 | 6.4829 |
| 6.4797 | 23.24 | 60000 | 6.5185 |
| 6.4675 | 23.43 | 60500 | 6.4920 |
| 6.4905 | 23.63 | 61000 | 6.5365 |
| 6.4659 | 23.82 | 61500 | 6.4717 |
| 6.4703 | 24.01 | 62000 | 6.4980 |
| 6.4654 | 24.21 | 62500 | 6.4492 |
| 6.4724 | 24.4 | 63000 | 6.5132 |
| 6.4939 | 24.59 | 63500 | 6.4642 |
| 6.4732 | 24.79 | 64000 | 6.4902 |
| 6.4781 | 24.98 | 64500 | 6.5341 |
| 6.4691 | 25.17 | 65000 | 6.5106 |
| 6.4644 | 25.37 | 65500 | 6.4463 |
| 6.4525 | 25.56 | 66000 | 6.4763 |
| 6.4423 | 25.76 | 66500 | 6.5226 |
| 6.4658 | 25.95 | 67000 | 6.4581 |
| 6.4624 | 26.14 | 67500 | 6.4748 |
| 6.4731 | 26.34 | 68000 | 6.4762 |
| 6.4381 | 26.53 | 68500 | 6.5184 |
| 6.4375 | 26.72 | 69000 | 6.4998 |
| 6.4559 | 26.92 | 69500 | 6.4751 |
| 6.4663 | 27.11 | 70000 | 6.4946 |
| 6.4551 | 27.3 | 70500 | 6.4495 |
| 6.4464 | 27.5 | 71000 | 6.4861 |
| 6.451 | 27.69 | 71500 | 6.4741 |
| 6.4491 | 27.89 | 72000 | 6.4275 |
| 6.4506 | 28.08 | 72500 | 6.4864 |
| 6.4262 | 28.27 | 73000 | 6.4839 |
| 6.4261 | 28.47 | 73500 | 6.4835 |
| 6.4408 | 28.66 | 74000 | 6.5073 |
| 6.4402 | 28.85 | 74500 | 6.4586 |
| 6.4414 | 29.05 | 75000 | 6.4639 |
| 6.453 | 29.24 | 75500 | 6.4764 |
| 6.4362 | 29.43 | 76000 | 6.5098 |
| 6.4262 | 29.63 | 76500 | 6.5176 |
| 6.4057 | 29.82 | 77000 | 6.5080 |
| 6.4393 | 30.02 | 77500 | 6.5053 |
| 6.4385 | 30.21 | 78000 | 6.4954 |
| 6.4592 | 30.4 | 78500 | 6.4517 |
| 6.4472 | 30.6 | 79000 | 6.4609 |
| 6.4099 | 30.79 | 79500 | 6.4770 |
| 6.3925 | 30.98 | 80000 | 6.4189 |
| 6.4423 | 31.18 | 80500 | 6.4781 |
| 6.4236 | 31.37 | 81000 | 6.4723 |
| 6.4315 | 31.56 | 81500 | 6.4890 |
| 6.4529 | 31.76 | 82000 | 6.5073 |
| 6.4292 | 31.95 | 82500 | 6.4460 |
| 6.4164 | 32.15 | 83000 | 6.4271 |
| 6.4124 | 32.34 | 83500 | 6.4864 |
| 6.4447 | 32.53 | 84000 | 6.4518 |
| 6.4161 | 32.73 | 84500 | 6.4543 |
| 6.4326 | 32.92 | 85000 | 6.4600 |
| 6.4209 | 33.11 | 85500 | 6.4686 |
| 6.4177 | 33.31 | 86000 | 6.4313 |
| 6.4317 | 33.5 | 86500 | 6.4893 |
| 6.4133 | 33.69 | 87000 | 6.4604 |
| 6.4331 | 33.89 | 87500 | 6.4411 |
| 6.4114 | 34.08 | 88000 | 6.4409 |
| 6.4202 | 34.28 | 88500 | 6.4300 |
| 6.4162 | 34.47 | 89000 | 6.4780 |
| 6.4305 | 34.66 | 89500 | 6.4473 |
| 6.412 | 34.86 | 90000 | 6.4621 |
| 6.4032 | 35.05 | 90500 | 6.4874 |
| 6.412 | 35.24 | 91000 | 6.4883 |
| 6.4088 | 35.44 | 91500 | 6.4290 |
| 6.4289 | 35.63 | 92000 | 6.4539 |
| 6.4101 | 35.82 | 92500 | 6.4571 |
| 6.3897 | 36.02 | 93000 | 6.4450 |
| 6.4122 | 36.21 | 93500 | 6.4488 |
| 6.412 | 36.41 | 94000 | 6.3988 |
| 6.4063 | 36.6 | 94500 | 6.4681 |
| 6.3905 | 36.79 | 95000 | 6.4018 |
| 6.3934 | 36.99 | 95500 | 6.4391 |
| 6.408 | 37.18 | 96000 | 6.4483 |
| 6.3968 | 37.37 | 96500 | 6.4651 |
| 6.3998 | 37.57 | 97000 | 6.4358 |
| 6.4061 | 37.76 | 97500 | 6.4524 |
| 6.4006 | 37.96 | 98000 | 6.4354 |
| 6.3871 | 38.15 | 98500 | 6.4286 |
| 6.3776 | 38.34 | 99000 | 6.4578 |
| 6.3997 | 38.54 | 99500 | 6.4358 |
| 6.3885 | 38.73 | 100000 | 6.4644 |
| 6.3923 | 38.92 | 100500 | 6.3955 |
| 6.3919 | 39.12 | 101000 | 6.4924 |
| 6.3814 | 39.31 | 101500 | 6.4437 |
| 6.3766 | 39.5 | 102000 | 6.4097 |
| 6.3889 | 39.7 | 102500 | 6.4231 |
| 6.3734 | 39.89 | 103000 | 6.4379 |
| 6.3926 | 40.09 | 103500 | 6.4474 |
| 6.3809 | 40.28 | 104000 | 6.4393 |
| 6.3738 | 40.47 | 104500 | 6.4199 |
| 6.3844 | 40.67 | 105000 | 6.4535 |
| 6.3654 | 40.86 | 105500 | 6.4676 |
| 6.3874 | 41.05 | 106000 | 6.4541 |
| 6.3622 | 41.25 | 106500 | 6.4522 |
| 6.3853 | 41.44 | 107000 | 6.4509 |
| 6.3858 | 41.63 | 107500 | 6.4682 |
| 6.3865 | 41.83 | 108000 | 6.3627 |
| 6.3838 | 42.02 | 108500 | 6.4209 |
| 6.3637 | 42.22 | 109000 | 6.4610 |
| 6.3836 | 42.41 | 109500 | 6.3808 |
| 6.3948 | 42.6 | 110000 | 6.4302 |
| 6.3619 | 42.8 | 110500 | 6.3986 |
| 6.3796 | 42.99 | 111000 | 6.3878 |
| 6.3881 | 43.18 | 111500 | 6.4563 |
| 6.3632 | 43.38 | 112000 | 6.4063 |
| 6.3509 | 43.57 | 112500 | 6.4481 |
| 6.3744 | 43.76 | 113000 | 6.4299 |
| 6.3418 | 43.96 | 113500 | 6.4200 |
| 6.3549 | 44.15 | 114000 | 6.4137 |
| 6.3534 | 44.35 | 114500 | 6.4691 |
| 6.3744 | 44.54 | 115000 | 6.4370 |
| 6.3637 | 44.73 | 115500 | 6.4239 |
| 6.3501 | 44.93 | 116000 | 6.4384 |
| 6.3738 | 45.12 | 116500 | 6.4248 |
| 6.3483 | 45.31 | 117000 | 6.4041 |
| 6.3908 | 45.51 | 117500 | 6.3876 |
| 6.3513 | 45.7 | 118000 | 6.3860 |
| 6.3587 | 45.89 | 118500 | 6.4781 |
| 6.3611 | 46.09 | 119000 | 6.4386 |
| 6.3418 | 46.28 | 119500 | 6.4188 |
| 6.3704 | 46.48 | 120000 | 6.3844 |
| 6.3775 | 46.67 | 120500 | 6.4102 |
| 6.3553 | 46.86 | 121000 | 6.4203 |
| 6.354 | 47.06 | 121500 | 6.3956 |
| 6.3586 | 47.25 | 122000 | 6.4365 |
| 6.3356 | 47.44 | 122500 | 6.4153 |
| 6.3627 | 47.64 | 123000 | 6.3749 |
| 6.3702 | 47.83 | 123500 | 6.4489 |
| 6.3356 | 48.02 | 124000 | 6.3944 |
| 6.3327 | 48.22 | 124500 | 6.3973 |
| 6.3545 | 48.41 | 125000 | 6.4039 |
| 6.358 | 48.61 | 125500 | 6.3921 |
| 6.3531 | 48.8 | 126000 | 6.4135 |
| 6.342 | 48.99 | 126500 | 6.4222 |
| 6.3625 | 49.19 | 127000 | 6.3813 |
| 6.3484 | 49.38 | 127500 | 6.4016 |
| 6.3492 | 49.57 | 128000 | 6.3944 |
| 6.3362 | 49.77 | 128500 | 6.4191 |
| 6.3495 | 49.96 | 129000 | 6.4099 |
| 6.3403 | 50.15 | 129500 | 6.3868 |
| 6.3231 | 50.35 | 130000 | 6.4068 |
| 6.3481 | 50.54 | 130500 | 6.4302 |
| 6.3641 | 50.74 | 131000 | 6.4025 |
| 6.3269 | 50.93 | 131500 | 6.3723 |
| 6.3605 | 51.12 | 132000 | 6.3974 |
| 6.3329 | 51.32 | 132500 | 6.4281 |
| 6.3783 | 51.51 | 133000 | 6.3982 |
| 6.3234 | 51.7 | 133500 | 6.3957 |
| 6.3497 | 51.9 | 134000 | 6.3913 |
| 6.3313 | 52.09 | 134500 | 6.4325 |
| 6.348 | 52.29 | 135000 | 6.3923 |
| 6.3291 | 52.48 | 135500 | 6.3462 |
| 6.3503 | 52.67 | 136000 | 6.3498 |
| 6.3202 | 52.87 | 136500 | 6.4250 |
| 6.3419 | 53.06 | 137000 | 6.3549 |
| 6.3375 | 53.25 | 137500 | 6.3781 |
| 6.3492 | 53.45 | 138000 | 6.3718 |
| 6.3237 | 53.64 | 138500 | 6.3962 |
| 6.328 | 53.83 | 139000 | 6.3892 |
| 6.3251 | 54.03 | 139500 | 6.4056 |
| 6.3297 | 54.22 | 140000 | 6.3886 |
| 6.328 | 54.42 | 140500 | 6.4028 |
| 6.3233 | 54.61 | 141000 | 6.3649 |
| 6.3379 | 54.8 | 141500 | 6.4070 |
| 6.3152 | 55.0 | 142000 | 6.4084 |
| 6.3409 | 55.19 | 142500 | 6.3630 |
| 6.3249 | 55.38 | 143000 | 6.3896 |
| 6.3148 | 55.58 | 143500 | 6.3882 |
| 6.3256 | 55.77 | 144000 | 6.3662 |
| 6.3176 | 55.96 | 144500 | 6.3843 |
| 6.295 | 56.16 | 145000 | 6.3652 |
| 6.3331 | 56.35 | 145500 | 6.4390 |
| 6.314 | 56.55 | 146000 | 6.3578 |
| 6.3305 | 56.74 | 146500 | 6.3335 |
| 6.3614 | 56.93 | 147000 | 6.3514 |
| 6.3556 | 57.13 | 147500 | 6.3592 |
| 6.3171 | 57.32 | 148000 | 6.3760 |
| 6.2904 | 57.51 | 148500 | 6.3886 |
| 6.3402 | 57.71 | 149000 | 6.3818 |
| 6.3265 | 57.9 | 149500 | 6.3572 |
| 6.3293 | 58.09 | 150000 | 6.3144 |
| 6.3169 | 58.29 | 150500 | 6.3792 |
| 6.3188 | 58.48 | 151000 | 6.3777 |
| 6.31 | 58.68 | 151500 | 6.3524 |
| 6.3091 | 58.87 | 152000 | 6.3450 |
| 6.2778 | 59.06 | 152500 | 6.3745 |
| 6.3019 | 59.26 | 153000 | 6.3503 |
| 6.293 | 59.45 | 153500 | 6.3432 |
| 6.3083 | 59.64 | 154000 | 6.3699 |
| 6.3324 | 59.84 | 154500 | 6.3354 |
| 6.3273 | 60.03 | 155000 | 6.3313 |
| 6.3186 | 60.22 | 155500 | 6.3619 |
| 6.296 | 60.42 | 156000 | 6.3852 |
| 6.3293 | 60.61 | 156500 | 6.3197 |
| 6.3143 | 60.81 | 157000 | 6.3526 |
| 6.3262 | 61.0 | 157500 | 6.3637 |
| 6.3045 | 61.19 | 158000 | 6.3603 |
| 6.2767 | 61.39 | 158500 | 6.4061 |
| 6.3032 | 61.58 | 159000 | 6.3877 |
| 6.2984 | 61.77 | 159500 | 6.4006 |
| 6.2887 | 61.97 | 160000 | 6.3599 |
| 6.2977 | 62.16 | 160500 | 6.3598 |
| 6.2865 | 62.35 | 161000 | 6.3446 |
| 6.3158 | 62.55 | 161500 | 6.3216 |
| 6.2867 | 62.74 | 162000 | 6.3698 |
| 6.2886 | 62.94 | 162500 | 6.3701 |
| 6.2752 | 63.13 | 163000 | 6.3066 |
| 6.2996 | 63.32 | 163500 | 6.3188 |
| 6.2919 | 63.52 | 164000 | 6.3131 |
| 6.3029 | 63.71 | 164500 | 6.2848 |
| 6.3074 | 63.9 | 165000 | 6.3071 |
| 6.2801 | 64.1 | 165500 | 6.3065 |
| 6.278 | 64.29 | 166000 | 6.2901 |
| 6.2701 | 64.48 | 166500 | 6.3544 |
| 6.2851 | 64.68 | 167000 | 6.3970 |
| 6.2829 | 64.87 | 167500 | 6.3621 |
| 6.2734 | 65.07 | 168000 | 6.3246 |
| 6.2982 | 65.26 | 168500 | 6.3342 |
| 6.2894 | 65.45 | 169000 | 6.3202 |
| 6.3093 | 65.65 | 169500 | 6.2975 |
| 6.2948 | 65.84 | 170000 | 6.3127 |
| 6.2872 | 66.03 | 170500 | 6.3311 |
| 6.267 | 66.23 | 171000 | 6.3159 |
| 6.2776 | 66.42 | 171500 | 6.2875 |
| 6.2794 | 66.62 | 172000 | 6.3315 |
| 6.2785 | 66.81 | 172500 | 6.3520 |
| 6.273 | 67.0 | 173000 | 6.3275 |
| 6.2821 | 67.2 | 173500 | 6.3348 |
| 6.2906 | 67.39 | 174000 | 6.2945 |
| 6.2839 | 67.58 | 174500 | 6.3456 |
| 6.272 | 67.78 | 175000 | 6.2964 |
| 6.2615 | 67.97 | 175500 | 6.3155 |
| 6.2838 | 68.16 | 176000 | 6.2967 |
| 6.2844 | 68.36 | 176500 | 6.3465 |
| 6.2554 | 68.55 | 177000 | 6.2919 |
| 6.3059 | 68.75 | 177500 | 6.2598 |
| 6.2793 | 68.94 | 178000 | 6.3347 |
| 6.2826 | 69.13 | 178500 | 6.2848 |
| 6.2609 | 69.33 | 179000 | 6.3692 |
| 6.2544 | 69.52 | 179500 | 6.3168 |
| 6.247 | 69.71 | 180000 | 6.3294 |
| 6.2493 | 69.91 | 180500 | 6.3097 |
| 6.2649 | 70.1 | 181000 | 6.3144 |
| 6.2606 | 70.29 | 181500 | 6.2910 |
| 6.2736 | 70.49 | 182000 | 6.3298 |
| 6.2425 | 70.68 | 182500 | 6.2905 |
| 6.25 | 70.88 | 183000 | 6.3027 |
| 6.2808 | 71.07 | 183500 | 6.2956 |
| 6.2782 | 71.26 | 184000 | 6.2946 |
| 6.2733 | 71.46 | 184500 | 6.2950 |
| 6.2669 | 71.65 | 185000 | 6.3152 |
| 6.2396 | 71.84 | 185500 | 6.3045 |
| 6.2881 | 72.04 | 186000 | 6.2768 |
| 6.2551 | 72.23 | 186500 | 6.2618 |
| 6.2352 | 72.42 | 187000 | 6.2557 |
| 6.2641 | 72.62 | 187500 | 6.2660 |
| 6.2432 | 72.81 | 188000 | 6.2997 |
| 6.2313 | 73.01 | 188500 | 6.3202 |
| 6.2562 | 73.2 | 189000 | 6.2877 |
| 6.2565 | 73.39 | 189500 | 6.2659 |
| 6.2728 | 73.59 | 190000 | 6.2763 |
| 6.2418 | 73.78 | 190500 | 6.2567 |
| 6.2704 | 73.97 | 191000 | 6.2568 |
| 6.2519 | 74.17 | 191500 | 6.2518 |
| 6.2794 | 74.36 | 192000 | 6.2631 |
| 6.2542 | 74.55 | 192500 | 6.2913 |
| 6.2501 | 74.75 | 193000 | 6.2927 |
| 6.2576 | 74.94 | 193500 | 6.2690 |
| 6.2661 | 75.14 | 194000 | 6.2881 |
| 6.2403 | 75.33 | 194500 | 6.2597 |
| 6.2379 | 75.52 | 195000 | 6.2629 |
| 6.2377 | 75.72 | 195500 | 6.2682 |
| 6.2115 | 75.91 | 196000 | 6.3002 |
| 6.226 | 76.1 | 196500 | 6.2506 |
| 6.2485 | 76.3 | 197000 | 6.2723 |
| 6.2326 | 76.49 | 197500 | 6.3033 |
| 6.2481 | 76.68 | 198000 | 6.2514 |
| 6.2526 | 76.88 | 198500 | 6.2639 |
| 6.2514 | 77.07 | 199000 | 6.2670 |
| 6.2308 | 77.27 | 199500 | 6.2644 |
| 6.2482 | 77.46 | 200000 | 6.2931 |
| 6.2278 | 77.65 | 200500 | 6.2476 |
| 6.2441 | 77.85 | 201000 | 6.1998 |
| 6.2328 | 78.04 | 201500 | 6.2583 |
| 6.241 | 78.23 | 202000 | 6.2229 |
| 6.2148 | 78.43 | 202500 | 6.2684 |
| 6.2262 | 78.62 | 203000 | 6.2946 |
| 6.2563 | 78.81 | 203500 | 6.2377 |
| 6.2019 | 79.01 | 204000 | 6.2411 |
| 6.2158 | 79.2 | 204500 | 6.2526 |
| 6.2382 | 79.4 | 205000 | 6.2308 |
| 6.2263 | 79.59 | 205500 | 6.2544 |
| 6.2097 | 79.78 | 206000 | 6.2356 |
| 6.2072 | 79.98 | 206500 | 6.2554 |
| 6.216 | 80.17 | 207000 | 6.2388 |
| 6.2019 | 80.36 | 207500 | 6.2589 |
| 6.2537 | 80.56 | 208000 | 6.2347 |
| 6.2253 | 80.75 | 208500 | 6.2654 |
| 6.2352 | 80.95 | 209000 | 6.1939 |
| 6.2309 | 81.14 | 209500 | 6.2902 |
| 6.1946 | 81.33 | 210000 | 6.2101 |
| 6.2189 | 81.53 | 210500 | 6.2582 |
| 6.2307 | 81.72 | 211000 | 6.2035 |
| 6.2137 | 81.91 | 211500 | 6.2357 |
| 6.2442 | 82.11 | 212000 | 6.2110 |
| 6.2493 | 82.3 | 212500 | 6.1889 |
| 6.2164 | 82.49 | 213000 | 6.2404 |
| 6.1968 | 82.69 | 213500 | 6.2383 |
| 6.2159 | 82.88 | 214000 | 6.2831 |
| 6.2115 | 83.08 | 214500 | 6.1869 |
| 6.2043 | 83.27 | 215000 | 6.2010 |
| 6.2163 | 83.46 | 215500 | 6.2458 |
| 6.1923 | 83.66 | 216000 | 6.1991 |
| 6.193 | 83.85 | 216500 | 6.2134 |
| 6.1885 | 84.04 | 217000 | 6.2060 |
| 6.1987 | 84.24 | 217500 | 6.2167 |
| 6.2178 | 84.43 | 218000 | 6.2093 |
| 6.1902 | 84.62 | 218500 | 6.1998 |
| 6.1993 | 84.82 | 219000 | 6.2215 |
| 6.1846 | 85.01 | 219500 | 6.2175 |
| 6.1994 | 85.21 | 220000 | 6.1620 |
| 6.2197 | 85.4 | 220500 | 6.1733 |
| 6.1873 | 85.59 | 221000 | 6.2190 |
| 6.2143 | 85.79 | 221500 | 6.1990 |
| 6.1939 | 85.98 | 222000 | 6.1844 |
| 6.2026 | 86.17 | 222500 | 6.1697 |
| 6.2153 | 86.37 | 223000 | 6.1711 |
| 6.179 | 86.56 | 223500 | 6.1625 |
| 6.1904 | 86.75 | 224000 | 6.1856 |
| 6.1703 | 86.95 | 224500 | 6.1340 |
| 6.1766 | 87.14 | 225000 | 6.2077 |
| 6.1807 | 87.34 | 225500 | 6.2494 |
| 6.1677 | 87.53 | 226000 | 6.1723 |
| 6.1902 | 87.72 | 226500 | 6.1880 |
| 6.2089 | 87.92 | 227000 | 6.1989 |
| 6.1794 | 88.11 | 227500 | 6.1637 |
| 6.1819 | 88.3 | 228000 | 6.1616 |
| 6.2141 | 88.5 | 228500 | 6.1359 |
| 6.181 | 88.69 | 229000 | 6.1380 |
| 6.1806 | 88.88 | 229500 | 6.1295 |
| 6.1877 | 89.08 | 230000 | 6.1433 |
| 6.1691 | 89.27 | 230500 | 6.1871 |
| 6.1444 | 89.47 | 231000 | 6.1767 |
| 6.1818 | 89.66 | 231500 | 6.1645 |
| 6.1764 | 89.85 | 232000 | 6.1641 |
| 6.216 | 90.05 | 232500 | 6.1159 |
| 6.1565 | 90.24 | 233000 | 6.1216 |
| 6.1665 | 90.43 | 233500 | 6.1386 |
| 6.1926 | 90.63 | 234000 | 6.1475 |
| 6.1786 | 90.82 | 234500 | 6.1157 |
| 6.193 | 91.01 | 235000 | 6.1285 |
| 6.1893 | 91.21 | 235500 | 6.1640 |
| 6.1677 | 91.4 | 236000 | 6.1405 |
| 6.1872 | 91.6 | 236500 | 6.0972 |
| 6.153 | 91.79 | 237000 | 6.1382 |
| 6.1652 | 91.98 | 237500 | 6.1195 |
| 6.1636 | 92.18 | 238000 | 6.0942 |
| 6.1589 | 92.37 | 238500 | 6.1100 |
| 6.1431 | 92.56 | 239000 | 6.1309 |
| 6.157 | 92.76 | 239500 | 6.1527 |
| 6.1698 | 92.95 | 240000 | 6.1463 |
| 6.1726 | 93.14 | 240500 | 6.1063 |
| 6.1638 | 93.34 | 241000 | 6.0897 |
| 6.1587 | 93.53 | 241500 | 6.1265 |
| 6.1723 | 93.73 | 242000 | 6.1383 |
| 6.1472 | 93.92 | 242500 | 6.0735 |
| 6.1774 | 94.11 | 243000 | 6.1021 |
| 6.1205 | 94.31 | 243500 | 6.1257 |
| 6.1624 | 94.5 | 244000 | 6.0797 |
| 6.1438 | 94.69 | 244500 | 6.1059 |
| 6.1722 | 94.89 | 245000 | 6.1110 |
| 6.1602 | 95.08 | 245500 | 6.0810 |
| 6.1423 | 95.27 | 246000 | 6.0668 |
| 6.1424 | 95.47 | 246500 | 6.1259 |
| 6.1472 | 95.66 | 247000 | 6.1133 |
| 6.1721 | 95.86 | 247500 | 6.0732 |
| 6.1389 | 96.05 | 248000 | 6.1028 |
| 6.1246 | 96.24 | 248500 | 6.1174 |
| 6.1285 | 96.44 | 249000 | 6.1167 |
| 6.1481 | 96.63 | 249500 | 6.0627 |
| 6.14 | 96.82 | 250000 | 6.0413 |
| 6.1426 | 97.02 | 250500 | 6.1137 |
| 6.1138 | 97.21 | 251000 | 6.0706 |
| 6.1153 | 97.41 | 251500 | 6.0864 |
| 6.1662 | 97.6 | 252000 | 6.0970 |
| 6.1157 | 97.79 | 252500 | 6.0543 |
| 6.129 | 97.99 | 253000 | 6.0617 |
| 6.1257 | 98.18 | 253500 | 6.0196 |
| 6.1188 | 98.37 | 254000 | 6.0871 |
| 6.1077 | 98.57 | 254500 | 6.0634 |
| 6.1202 | 98.76 | 255000 | 6.0254 |
| 6.1276 | 98.95 | 255500 | 6.1073 |
| 6.1105 | 99.15 | 256000 | 6.0030 |
| 6.105 | 99.34 | 256500 | 6.0244 |
| 6.1072 | 99.54 | 257000 | 6.0833 |
| 6.1061 | 99.73 | 257500 | 6.0157 |
| 6.1076 | 99.92 | 258000 | 6.0297 |
| 6.1397 | 100.12 | 258500 | 6.0709 |
| 6.1106 | 100.31 | 259000 | 6.0028 |
| 6.1141 | 100.5 | 259500 | 6.0651 |
| 6.1342 | 100.7 | 260000 | 6.0409 |
| 6.1062 | 100.89 | 260500 | 5.9981 |
| 6.1108 | 101.08 | 261000 | 5.9928 |
| 6.1198 | 101.28 | 261500 | 6.0348 |
| 6.1311 | 101.47 | 262000 | 6.0392 |
| 6.1215 | 101.67 | 262500 | 6.0286 |
| 6.0773 | 101.86 | 263000 | 6.0042 |
| 6.1002 | 102.05 | 263500 | 6.0400 |
| 6.084 | 102.25 | 264000 | 6.0476 |
| 6.1023 | 102.44 | 264500 | 6.0125 |
| 6.1006 | 102.63 | 265000 | 6.0086 |
| 6.1284 | 102.83 | 265500 | 5.9758 |
| 6.1001 | 103.02 | 266000 | 6.0136 |
| 6.1029 | 103.21 | 266500 | 5.9535 |
| 6.085 | 103.41 | 267000 | 5.9307 |
| 6.085 | 103.6 | 267500 | 5.9810 |
| 6.0918 | 103.8 | 268000 | 5.9972 |
| 6.0899 | 103.99 | 268500 | 6.0040 |
| 6.108 | 104.18 | 269000 | 5.9606 |
| 6.0835 | 104.38 | 269500 | 6.0150 |
| 6.0984 | 104.57 | 270000 | 5.9414 |
| 6.0727 | 104.76 | 270500 | 5.9904 |
| 6.0962 | 104.96 | 271000 | 5.9662 |
| 6.0813 | 105.15 | 271500 | 5.9947 |
| 6.105 | 105.34 | 272000 | 5.9831 |
| 6.0765 | 105.54 | 272500 | 6.0098 |
| 6.0748 | 105.73 | 273000 | 5.9466 |
| 6.0643 | 105.93 | 273500 | 5.9434 |
| 6.0818 | 106.12 | 274000 | 5.9881 |
| 6.0775 | 106.31 | 274500 | 6.0043 |
| 6.088 | 106.51 | 275000 | 5.9833 |
| 6.0981 | 106.7 | 275500 | 5.9426 |
| 6.0565 | 106.89 | 276000 | 5.9937 |
| 6.0769 | 107.09 | 276500 | 5.9498 |
| 6.0615 | 107.28 | 277000 | 5.9442 |
| 6.0802 | 107.47 | 277500 | 5.9181 |
| 6.0732 | 107.67 | 278000 | 5.9088 |
| 6.0626 | 107.86 | 278500 | 5.9383 |
| 6.0914 | 108.06 | 279000 | 5.9347 |
| 6.0359 | 108.25 | 279500 | 5.9666 |
| 6.0672 | 108.44 | 280000 | 5.9783 |
| 6.0726 | 108.64 | 280500 | 5.8990 |
| 6.0677 | 108.83 | 281000 | 5.9633 |
| 6.0641 | 109.02 | 281500 | 5.9010 |
| 6.0415 | 109.22 | 282000 | 5.9579 |
| 6.0544 | 109.41 | 282500 | 5.9360 |
| 6.0775 | 109.6 | 283000 | 5.9221 |
| 6.0786 | 109.8 | 283500 | 5.8871 |
| 6.0598 | 109.99 | 284000 | 5.9277 |
| 6.0783 | 110.19 | 284500 | 5.9164 |
| 6.0499 | 110.38 | 285000 | 5.9539 |
| 6.0655 | 110.57 | 285500 | 5.8884 |
| 6.054 | 110.77 | 286000 | 5.8377 |
| 6.0548 | 110.96 | 286500 | 5.8962 |
| 6.0543 | 111.15 | 287000 | 5.9042 |
| 6.0446 | 111.35 | 287500 | 5.9362 |
| 6.0429 | 111.54 | 288000 | 5.9378 |
| 6.0564 | 111.74 | 288500 | 5.9262 |
| 6.0559 | 111.93 | 289000 | 5.8897 |
| 6.0267 | 112.12 | 289500 | 5.8988 |
| 6.0402 | 112.32 | 290000 | 5.8629 |
| 6.0353 | 112.51 | 290500 | 5.8836 |
| 6.0337 | 112.7 | 291000 | 5.9077 |
| 6.0556 | 112.9 | 291500 | 5.8944 |
| 6.006 | 113.09 | 292000 | 5.8421 |
| 6.0253 | 113.28 | 292500 | 5.8627 |
| 6.0431 | 113.48 | 293000 | 5.8871 |
| 6.0452 | 113.67 | 293500 | 5.9370 |
| 6.0406 | 113.87 | 294000 | 5.8726 |
| 6.0383 | 114.06 | 294500 | 5.8940 |
| 6.0168 | 114.25 | 295000 | 5.9241 |
| 6.0144 | 114.45 | 295500 | 5.8618 |
| 6.0422 | 114.64 | 296000 | 5.8867 |
| 6.0353 | 114.83 | 296500 | 5.8656 |
| 6.0176 | 115.03 | 297000 | 5.8710 |
| 6.0351 | 115.22 | 297500 | 5.8750 |
| 6.0387 | 115.41 | 298000 | 5.8251 |
| 6.0369 | 115.61 | 298500 | 5.8821 |
| 5.9935 | 115.8 | 299000 | 5.8763 |
| 6.0324 | 116.0 | 299500 | 5.8195 |
| 6.016 | 116.19 | 300000 | 5.9093 |
| 6.0085 | 116.38 | 300500 | 5.8991 |
| 6.0163 | 116.58 | 301000 | 5.8530 |
| 5.9794 | 116.77 | 301500 | 5.8573 |
| 6.0053 | 116.96 | 302000 | 5.8403 |
| 5.9691 | 117.16 | 302500 | 5.8189 |
| 6.0235 | 117.35 | 303000 | 5.8071 |
| 6.0432 | 117.54 | 303500 | 5.7983 |
| 6.0167 | 117.74 | 304000 | 5.8640 |
| 5.9905 | 117.93 | 304500 | 5.8887 |
| 5.9941 | 118.13 | 305000 | 5.8196 |
| 6.0021 | 118.32 | 305500 | 5.8368 |
| 5.9802 | 118.51 | 306000 | 5.8229 |
| 5.9773 | 118.71 | 306500 | 5.8570 |
| 5.9757 | 118.9 | 307000 | 5.7777 |
| 6.0091 | 119.09 | 307500 | 5.7950 |
| 5.9971 | 119.29 | 308000 | 5.8058 |
| 5.9846 | 119.48 | 308500 | 5.8305 |
| 5.988 | 119.67 | 309000 | 5.7729 |
| 5.9825 | 119.87 | 309500 | 5.7965 |
| 6.0092 | 120.06 | 310000 | 5.7714 |
| 6.0 | 120.26 | 310500 | 5.8226 |
| 5.9562 | 120.45 | 311000 | 5.7942 |
| 5.9945 | 120.64 | 311500 | 5.7819 |
| 5.9627 | 120.84 | 312000 | 5.8089 |
| 5.9931 | 121.03 | 312500 | 5.8007 |
| 5.9671 | 121.22 | 313000 | 5.8015 |
| 6.001 | 121.42 | 313500 | 5.7838 |
| 5.983 | 121.61 | 314000 | 5.8071 |
| 5.9861 | 121.8 | 314500 | 5.8000 |
| 5.9767 | 122.0 | 315000 | 5.7423 |
| 5.9704 | 122.19 | 315500 | 5.7823 |
| 5.9561 | 122.39 | 316000 | 5.7528 |
| 5.9631 | 122.58 | 316500 | 5.7772 |
| 5.9732 | 122.77 | 317000 | 5.7773 |
| 5.9914 | 122.97 | 317500 | 5.7848 |
| 5.987 | 123.16 | 318000 | 5.7447 |
| 5.9451 | 123.35 | 318500 | 5.7845 |
| 5.9494 | 123.55 | 319000 | 5.7627 |
| 5.9717 | 123.74 | 319500 | 5.7585 |
| 5.9437 | 123.93 | 320000 | 5.7714 |
| 5.9714 | 124.13 | 320500 | 5.7679 |
| 5.9405 | 124.32 | 321000 | 5.7276 |
| 5.9532 | 124.52 | 321500 | 5.7943 |
| 5.9563 | 124.71 | 322000 | 5.7375 |
| 5.956 | 124.9 | 322500 | 5.7355 |
| 5.9469 | 125.1 | 323000 | 5.7351 |
| 5.9721 | 125.29 | 323500 | 5.7592 |
| 5.9573 | 125.48 | 324000 | 5.7352 |
| 5.9558 | 125.68 | 324500 | 5.7532 |
| 5.9481 | 125.87 | 325000 | 5.7344 |
| 5.962 | 126.07 | 325500 | 5.7352 |
| 5.9668 | 126.26 | 326000 | 5.7034 |
| 5.9436 | 126.45 | 326500 | 5.7157 |
| 5.9579 | 126.65 | 327000 | 5.7318 |
| 5.924 | 126.84 | 327500 | 5.6861 |
| 5.9429 | 127.03 | 328000 | 5.7517 |
| 5.9263 | 127.23 | 328500 | 5.7812 |
| 5.9501 | 127.42 | 329000 | 5.7444 |
| 5.9481 | 127.61 | 329500 | 5.6990 |
| 5.9563 | 127.81 | 330000 | 5.7232 |
| 5.9362 | 128.0 | 330500 | 5.7270 |
| 5.9223 | 128.2 | 331000 | 5.7522 |
| 5.9314 | 128.39 | 331500 | 5.7059 |
| 5.9335 | 128.58 | 332000 | 5.7011 |
| 5.9314 | 128.78 | 332500 | 5.7114 |
| 5.9476 | 128.97 | 333000 | 5.6984 |
| 5.9133 | 129.16 | 333500 | 5.7490 |
| 5.9616 | 129.36 | 334000 | 5.7261 |
| 5.9224 | 129.55 | 334500 | 5.6712 |
| 5.9301 | 129.74 | 335000 | 5.7070 |
| 5.9273 | 129.94 | 335500 | 5.6583 |
| 5.9176 | 130.13 | 336000 | 5.6984 |
| 5.9181 | 130.33 | 336500 | 5.6638 |
| 5.9331 | 130.52 | 337000 | 5.6596 |
| 5.9161 | 130.71 | 337500 | 5.6462 |
| 5.8896 | 130.91 | 338000 | 5.7193 |
| 5.906 | 131.1 | 338500 | 5.6919 |
| 5.9277 | 131.29 | 339000 | 5.7109 |
| 5.917 | 131.49 | 339500 | 5.7309 |
| 5.9208 | 131.68 | 340000 | 5.6484 |
| 5.9108 | 131.87 | 340500 | 5.7129 |
| 5.9192 | 132.07 | 341000 | 5.6477 |
| 5.9108 | 132.26 | 341500 | 5.6546 |
| 5.8858 | 132.46 | 342000 | 5.6823 |
| 5.9272 | 132.65 | 342500 | 5.6619 |
| 5.9104 | 132.84 | 343000 | 5.6446 |
| 5.8863 | 133.04 | 343500 | 5.6903 |
| 5.9221 | 133.23 | 344000 | 5.6717 |
| 5.9181 | 133.42 | 344500 | 5.6931 |
| 5.8639 | 133.62 | 345000 | 5.6886 |
| 5.9569 | 133.81 | 345500 | 5.6852 |
| 5.9086 | 134.0 | 346000 | 5.6531 |
| 5.9009 | 134.2 | 346500 | 5.6950 |
| 5.9131 | 134.39 | 347000 | 5.6686 |
| 5.9135 | 134.59 | 347500 | 5.6983 |
| 5.9059 | 134.78 | 348000 | 5.6516 |
| 5.8808 | 134.97 | 348500 | 5.6244 |
| 5.8817 | 135.17 | 349000 | 5.6266 |
| 5.8753 | 135.36 | 349500 | 5.6479 |
| 5.8801 | 135.55 | 350000 | 5.6431 |
| 5.8649 | 135.75 | 350500 | 5.6959 |
| 5.8893 | 135.94 | 351000 | 5.6552 |
| 5.8809 | 136.13 | 351500 | 5.6294 |
| 5.8763 | 136.33 | 352000 | 5.5950 |
| 5.8668 | 136.52 | 352500 | 5.6509 |
| 5.8815 | 136.72 | 353000 | 5.6334 |
| 5.884 | 136.91 | 353500 | 5.6059 |
| 5.8801 | 137.1 | 354000 | 5.6690 |
| 5.8969 | 137.3 | 354500 | 5.5998 |
| 5.8768 | 137.49 | 355000 | 5.6211 |
| 5.8703 | 137.68 | 355500 | 5.6612 |
| 5.8759 | 137.88 | 356000 | 5.5840 |
| 5.8714 | 138.07 | 356500 | 5.5737 |
| 5.8848 | 138.26 | 357000 | 5.6426 |
| 5.8477 | 138.46 | 357500 | 5.6164 |
| 5.8549 | 138.65 | 358000 | 5.6253 |
| 5.863 | 138.85 | 358500 | 5.6246 |
| 5.8729 | 139.04 | 359000 | 5.6626 |
| 5.8503 | 139.23 | 359500 | 5.6267 |
| 5.844 | 139.43 | 360000 | 5.6095 |
| 5.8388 | 139.62 | 360500 | 5.6281 |
| 5.846 | 139.81 | 361000 | 5.6648 |
| 5.8621 | 140.01 | 361500 | 5.6222 |
| 5.8595 | 140.2 | 362000 | 5.5792 |
| 5.8632 | 140.4 | 362500 | 5.5882 |
| 5.8598 | 140.59 | 363000 | 5.5988 |
| 5.8528 | 140.78 | 363500 | 5.5913 |
| 5.8632 | 140.98 | 364000 | 5.5803 |
| 5.8408 | 141.17 | 364500 | 5.5976 |
| 5.8687 | 141.36 | 365000 | 5.5876 |
| 5.8236 | 141.56 | 365500 | 5.6500 |
| 5.8713 | 141.75 | 366000 | 5.5915 |
| 5.8684 | 141.94 | 366500 | 5.6197 |
| 5.8592 | 142.14 | 367000 | 5.5516 |
| 5.8548 | 142.33 | 367500 | 5.5978 |
| 5.8483 | 142.53 | 368000 | 5.5751 |
| 5.8428 | 142.72 | 368500 | 5.6102 |
| 5.8305 | 142.91 | 369000 | 5.5387 |
| 5.8211 | 143.11 | 369500 | 5.5782 |
| 5.8425 | 143.3 | 370000 | 5.5443 |
| 5.8089 | 143.49 | 370500 | 5.5261 |
| 5.818 | 143.69 | 371000 | 5.5743 |
| 5.874 | 143.88 | 371500 | 5.5478 |
| 5.7944 | 144.07 | 372000 | 5.5818 |
| 5.8595 | 144.27 | 372500 | 5.5393 |
| 5.8456 | 144.46 | 373000 | 5.5713 |
| 5.8278 | 144.66 | 373500 | 5.5661 |
| 5.8337 | 144.85 | 374000 | 5.5628 |
| 5.8421 | 145.04 | 374500 | 5.6046 |
| 5.8462 | 145.24 | 375000 | 5.5581 |
| 5.8205 | 145.43 | 375500 | 5.5547 |
| 5.8076 | 145.62 | 376000 | 5.5323 |
| 5.8244 | 145.82 | 376500 | 5.5266 |
| 5.8509 | 146.01 | 377000 | 5.5014 |
| 5.815 | 146.2 | 377500 | 5.5106 |
| 5.8371 | 146.4 | 378000 | 5.5998 |
| 5.8157 | 146.59 | 378500 | 5.5538 |
| 5.8436 | 146.79 | 379000 | 5.5187 |
| 5.8205 | 146.98 | 379500 | 5.5724 |
| 5.8312 | 147.17 | 380000 | 5.5023 |
| 5.8223 | 147.37 | 380500 | 5.5392 |
| 5.8202 | 147.56 | 381000 | 5.5574 |
| 5.7997 | 147.75 | 381500 | 5.5587 |
| 5.824 | 147.95 | 382000 | 5.5293 |
| 5.8008 | 148.14 | 382500 | 5.5805 |
| 5.8229 | 148.33 | 383000 | 5.5611 |
| 5.8047 | 148.53 | 383500 | 5.5052 |
| 5.8054 | 148.72 | 384000 | 5.6634 |
| 5.805 | 148.92 | 384500 | 5.5414 |
| 5.8054 | 149.11 | 385000 | 5.5301 |
| 5.8028 | 149.3 | 385500 | 5.5031 |
| 5.822 | 149.5 | 386000 | 5.5315 |
| 5.7946 | 149.69 | 386500 | 5.5576 |
| 5.7915 | 149.88 | 387000 | 5.5596 |
| 5.8203 | 150.08 | 387500 | 5.5502 |
| 5.7824 | 150.27 | 388000 | 5.5722 |
| 5.7706 | 150.46 | 388500 | 5.5451 |
| 5.8074 | 150.66 | 389000 | 5.5307 |
| 5.8216 | 150.85 | 389500 | 5.5555 |
| 5.7996 | 151.05 | 390000 | 5.5039 |
| 5.8076 | 151.24 | 390500 | 5.5535 |
| 5.7969 | 151.43 | 391000 | 5.5254 |
| 5.7884 | 151.63 | 391500 | 5.5390 |
| 5.7691 | 151.82 | 392000 | 5.5186 |
| 5.7964 | 152.01 | 392500 | 5.5439 |
| 5.7907 | 152.21 | 393000 | 5.5262 |
| 5.7896 | 152.4 | 393500 | 5.5059 |
| 5.7943 | 152.59 | 394000 | 5.5126 |
| 5.81 | 152.79 | 394500 | 5.4547 |
| 5.7981 | 152.98 | 395000 | 5.5141 |
| 5.7845 | 153.18 | 395500 | 5.5964 |
| 5.7919 | 153.37 | 396000 | 5.4650 |
| 5.8165 | 153.56 | 396500 | 5.5123 |
| 5.7675 | 153.76 | 397000 | 5.5191 |
| 5.7473 | 153.95 | 397500 | 5.5018 |
| 5.7774 | 154.14 | 398000 | 5.4447 |
| 5.7875 | 154.34 | 398500 | 5.4997 |
| 5.7614 | 154.53 | 399000 | 5.5125 |
| 5.7704 | 154.73 | 399500 | 5.5306 |
| 5.8041 | 154.92 | 400000 | 5.4993 |
| 5.7729 | 155.11 | 400500 | 5.5061 |
| 5.7782 | 155.31 | 401000 | 5.4924 |
| 5.7788 | 155.5 | 401500 | 5.5045 |
| 5.7867 | 155.69 | 402000 | 5.5064 |
| 5.7453 | 155.89 | 402500 | 5.4588 |
| 5.7694 | 156.08 | 403000 | 5.4874 |
| 5.7495 | 156.27 | 403500 | 5.4519 |
| 5.7981 | 156.47 | 404000 | 5.5117 |
| 5.7725 | 156.66 | 404500 | 5.4655 |
| 5.7646 | 156.86 | 405000 | 5.4456 |
| 5.7733 | 157.05 | 405500 | 5.4685 |
| 5.7618 | 157.24 | 406000 | 5.4861 |
| 5.7747 | 157.44 | 406500 | 5.4771 |
| 5.742 | 157.63 | 407000 | 5.4824 |
| 5.7884 | 157.82 | 407500 | 5.4122 |
| 5.7312 | 158.02 | 408000 | 5.4824 |
| 5.7584 | 158.21 | 408500 | 5.5168 |
| 5.7494 | 158.4 | 409000 | 5.4527 |
| 5.7351 | 158.6 | 409500 | 5.4517 |
| 5.7571 | 158.79 | 410000 | 5.4462 |
| 5.7646 | 158.99 | 410500 | 5.4827 |
| 5.7448 | 159.18 | 411000 | 5.4191 |
| 5.7008 | 159.37 | 411500 | 5.5147 |
| 5.7455 | 159.57 | 412000 | 5.4602 |
| 5.7352 | 159.76 | 412500 | 5.4281 |
| 5.7438 | 159.95 | 413000 | 5.4478 |
| 5.7111 | 160.15 | 413500 | 5.4608 |
| 5.742 | 160.34 | 414000 | 5.4418 |
| 5.7541 | 160.53 | 414500 | 5.4423 |
| 5.7397 | 160.73 | 415000 | 5.4406 |
| 5.7393 | 160.92 | 415500 | 5.4741 |
| 5.7342 | 161.12 | 416000 | 5.4575 |
| 5.7198 | 161.31 | 416500 | 5.3906 |
| 5.691 | 161.5 | 417000 | 5.4405 |
| 5.7585 | 161.7 | 417500 | 5.4259 |
| 5.7279 | 161.89 | 418000 | 5.5081 |
| 5.7217 | 162.08 | 418500 | 5.3794 |
| 5.7452 | 162.28 | 419000 | 5.4250 |
| 5.7226 | 162.47 | 419500 | 5.4700 |
| 5.7482 | 162.66 | 420000 | 5.4034 |
| 5.7095 | 162.86 | 420500 | 5.4118 |
| 5.6917 | 163.05 | 421000 | 5.4417 |
| 5.7282 | 163.25 | 421500 | 5.4055 |
| 5.7171 | 163.44 | 422000 | 5.4351 |
| 5.7424 | 163.63 | 422500 | 5.4415 |
| 5.6961 | 163.83 | 423000 | 5.4633 |
| 5.7231 | 164.02 | 423500 | 5.4643 |
| 5.7365 | 164.21 | 424000 | 5.4110 |
| 5.7358 | 164.41 | 424500 | 5.4220 |
| 5.7008 | 164.6 | 425000 | 5.4246 |
| 5.7353 | 164.79 | 425500 | 5.3805 |
| 5.7047 | 164.99 | 426000 | 5.3864 |
| 5.701 | 165.18 | 426500 | 5.4106 |
| 5.7117 | 165.38 | 427000 | 5.4074 |
| 5.7173 | 165.57 | 427500 | 5.4123 |
| 5.7192 | 165.76 | 428000 | 5.3903 |
| 5.709 | 165.96 | 428500 | 5.4557 |
| 5.7064 | 166.15 | 429000 | 5.3853 |
| 5.6831 | 166.34 | 429500 | 5.4376 |
| 5.6873 | 166.54 | 430000 | 5.4053 |
| 5.6988 | 166.73 | 430500 | 5.4159 |
| 5.7169 | 166.92 | 431000 | 5.4370 |
| 5.7118 | 167.12 | 431500 | 5.3915 |
| 5.6992 | 167.31 | 432000 | 5.4012 |
| 5.6984 | 167.51 | 432500 | 5.3864 |
| 5.6991 | 167.7 | 433000 | 5.3968 |
| 5.7088 | 167.89 | 433500 | 5.4048 |
| 5.6914 | 168.09 | 434000 | 5.3965 |
| 5.6985 | 168.28 | 434500 | 5.4305 |
| 5.716 | 168.47 | 435000 | 5.4073 |
| 5.7114 | 168.67 | 435500 | 5.3939 |
| 5.6991 | 168.86 | 436000 | 5.4275 |
| 5.6844 | 169.05 | 436500 | 5.4270 |
| 5.6609 | 169.25 | 437000 | 5.3867 |
| 5.6984 | 169.44 | 437500 | 5.4050 |
| 5.6937 | 169.64 | 438000 | 5.3821 |
| 5.7043 | 169.83 | 438500 | 5.4297 |
| 5.7031 | 170.02 | 439000 | 5.4376 |
| 5.6958 | 170.22 | 439500 | 5.3795 |
| 5.658 | 170.41 | 440000 | 5.4534 |
| 5.6807 | 170.6 | 440500 | 5.4420 |
| 5.6979 | 170.8 | 441000 | 5.4005 |
| 5.6782 | 170.99 | 441500 | 5.3995 |
| 5.6872 | 171.19 | 442000 | 5.3994 |
| 5.6786 | 171.38 | 442500 | 5.3890 |
| 5.6815 | 171.57 | 443000 | 5.4163 |
| 5.6832 | 171.77 | 443500 | 5.4296 |
| 5.6833 | 171.96 | 444000 | 5.3816 |
| 5.6773 | 172.15 | 444500 | 5.3820 |
| 5.6489 | 172.35 | 445000 | 5.3720 |
| 5.6826 | 172.54 | 445500 | 5.3859 |
| 5.675 | 172.73 | 446000 | 5.3909 |
| 5.6678 | 172.93 | 446500 | 5.3636 |
| 5.6802 | 173.12 | 447000 | 5.3338 |
| 5.6882 | 173.32 | 447500 | 5.3822 |
| 5.6817 | 173.51 | 448000 | 5.3794 |
| 5.6744 | 173.7 | 448500 | 5.3187 |
| 5.6407 | 173.9 | 449000 | 5.3966 |
| 5.6389 | 174.09 | 449500 | 5.3547 |
| 5.6648 | 174.28 | 450000 | 5.3423 |
| 5.6576 | 174.48 | 450500 | 5.3684 |
| 5.6484 | 174.67 | 451000 | 5.3507 |
| 5.6705 | 174.86 | 451500 | 5.4060 |
| 5.6877 | 175.06 | 452000 | 5.3540 |
| 5.6768 | 175.25 | 452500 | 5.3535 |
| 5.6693 | 175.45 | 453000 | 5.3339 |
| 5.6294 | 175.64 | 453500 | 5.3484 |
| 5.6398 | 175.83 | 454000 | 5.3836 |
| 5.6617 | 176.03 | 454500 | 5.4004 |
| 5.6628 | 176.22 | 455000 | 5.3228 |
| 5.6707 | 176.41 | 455500 | 5.3083 |
| 5.6593 | 176.61 | 456000 | 5.3822 |
| 5.6522 | 176.8 | 456500 | 5.3683 |
| 5.6483 | 176.99 | 457000 | 5.3286 |
| 5.6352 | 177.19 | 457500 | 5.4293 |
| 5.6528 | 177.38 | 458000 | 5.3603 |
| 5.6591 | 177.58 | 458500 | 5.3808 |
| 5.6799 | 177.77 | 459000 | 5.4076 |
| 5.6485 | 177.96 | 459500 | 5.3092 |
| 5.6645 | 178.16 | 460000 | 5.3530 |
| 5.6401 | 178.35 | 460500 | 5.3411 |
| 5.6307 | 178.54 | 461000 | 5.3876 |
| 5.6338 | 178.74 | 461500 | 5.3084 |
| 5.6684 | 178.93 | 462000 | 5.3771 |
| 5.6684 | 179.12 | 462500 | 5.3206 |
| 5.6373 | 179.32 | 463000 | 5.3839 |
| 5.6817 | 179.51 | 463500 | 5.4119 |
| 5.6499 | 179.71 | 464000 | 5.3780 |
| 5.6542 | 179.9 | 464500 | 5.4049 |
| 5.6648 | 180.09 | 465000 | 5.2990 |
| 5.6531 | 180.29 | 465500 | 5.3401 |
| 5.6586 | 180.48 | 466000 | 5.4087 |
| 5.6261 | 180.67 | 466500 | 5.3383 |
| 5.6128 | 180.87 | 467000 | 5.3714 |
| 5.6704 | 181.06 | 467500 | 5.3260 |
| 5.6429 | 181.25 | 468000 | 5.3600 |
| 5.638 | 181.45 | 468500 | 5.3364 |
| 5.651 | 181.64 | 469000 | 5.4135 |
| 5.6448 | 181.84 | 469500 | 5.4075 |
| 5.6273 | 182.03 | 470000 | 5.3312 |
| 5.6459 | 182.22 | 470500 | 5.3315 |
| 5.6487 | 182.42 | 471000 | 5.3298 |
| 5.6669 | 182.61 | 471500 | 5.3472 |
| 5.6473 | 182.8 | 472000 | 5.3055 |
| 5.6281 | 183.0 | 472500 | 5.2734 |
| 5.6327 | 183.19 | 473000 | 5.3361 |
| 5.614 | 183.38 | 473500 | 5.3431 |
| 5.6216 | 183.58 | 474000 | 5.3655 |
| 5.6307 | 183.77 | 474500 | 5.3467 |
| 5.6411 | 183.97 | 475000 | 5.4350 |
| 5.6219 | 184.16 | 475500 | 5.3125 |
| 5.6226 | 184.35 | 476000 | 5.3687 |
| 5.6078 | 184.55 | 476500 | 5.3488 |
| 5.6096 | 184.74 | 477000 | 5.3533 |
| 5.6246 | 184.93 | 477500 | 5.3244 |
| 5.618 | 185.13 | 478000 | 5.3299 |
| 5.6114 | 185.32 | 478500 | 5.3263 |
| 5.5982 | 185.52 | 479000 | 5.3405 |
| 5.6245 | 185.71 | 479500 | 5.3282 |
| 5.6172 | 185.9 | 480000 | 5.3250 |
| 5.5996 | 186.1 | 480500 | 5.3614 |
| 5.65 | 186.29 | 481000 | 5.3115 |
| 5.6313 | 186.48 | 481500 | 5.3997 |
| 5.6252 | 186.68 | 482000 | 5.3107 |
| 5.6152 | 186.87 | 482500 | 5.2778 |
| 5.6237 | 187.06 | 483000 | 5.3143 |
| 5.6066 | 187.26 | 483500 | 5.2831 |
| 5.6261 | 187.45 | 484000 | 5.3489 |
| 5.6369 | 187.65 | 484500 | 5.3050 |
| 5.5793 | 187.84 | 485000 | 5.2617 |
| 5.6006 | 188.03 | 485500 | 5.2924 |
| 5.5963 | 188.23 | 486000 | 5.2961 |
| 5.6163 | 188.42 | 486500 | 5.3068 |
| 5.5976 | 188.61 | 487000 | 5.3241 |
| 5.6247 | 188.81 | 487500 | 5.3540 |
| 5.6252 | 189.0 | 488000 | 5.2798 |
| 5.5877 | 189.19 | 488500 | 5.3412 |
| 5.6068 | 189.39 | 489000 | 5.3222 |
| 5.6096 | 189.58 | 489500 | 5.3245 |
| 5.6141 | 189.78 | 490000 | 5.4048 |
| 5.6076 | 189.97 | 490500 | 5.3013 |
| 5.5593 | 190.16 | 491000 | 5.2765 |
| 5.5958 | 190.36 | 491500 | 5.3411 |
| 5.6028 | 190.55 | 492000 | 5.3543 |
| 5.5886 | 190.74 | 492500 | 5.3400 |
| 5.6006 | 190.94 | 493000 | 5.2841 |
| 5.5828 | 191.13 | 493500 | 5.3125 |
| 5.5995 | 191.32 | 494000 | 5.2710 |
| 5.585 | 191.52 | 494500 | 5.3224 |
| 5.6109 | 191.71 | 495000 | 5.3154 |
| 5.5949 | 191.91 | 495500 | 5.3213 |
| 5.5803 | 192.1 | 496000 | 5.3214 |
| 5.5996 | 192.29 | 496500 | 5.2980 |
| 5.5777 | 192.49 | 497000 | 5.3015 |
| 5.6193 | 192.68 | 497500 | 5.3166 |
| 5.624 | 192.87 | 498000 | 5.2569 |
| 5.5654 | 193.07 | 498500 | 5.2981 |
| 5.5593 | 193.26 | 499000 | 5.2812 |
| 5.5732 | 193.45 | 499500 | 5.2912 |
| 5.6158 | 193.65 | 500000 | 5.3224 |
| 5.6012 | 193.84 | 500500 | 5.3529 |
| 5.5906 | 194.04 | 501000 | 5.2782 |
| 5.5993 | 194.23 | 501500 | 5.2995 |
| 5.5731 | 194.42 | 502000 | 5.2697 |
| 5.5928 | 194.62 | 502500 | 5.2955 |
| 5.5777 | 194.81 | 503000 | 5.2641 |
| 5.5753 | 195.0 | 503500 | 5.3061 |
| 5.6029 | 195.2 | 504000 | 5.3681 |
| 5.563 | 195.39 | 504500 | 5.3171 |
| 5.6065 | 195.58 | 505000 | 5.3106 |
| 5.574 | 195.78 | 505500 | 5.3547 |
| 5.5759 | 195.97 | 506000 | 5.2560 |
| 5.5704 | 196.17 | 506500 | 5.3061 |
| 5.5619 | 196.36 | 507000 | 5.3233 |
| 5.5876 | 196.55 | 507500 | 5.2826 |
| 5.5849 | 196.75 | 508000 | 5.3096 |
| 5.5938 | 196.94 | 508500 | 5.2849 |
| 5.5666 | 197.13 | 509000 | 5.3538 |
| 5.5784 | 197.33 | 509500 | 5.2532 |
| 5.5893 | 197.52 | 510000 | 5.2387 |
| 5.5556 | 197.71 | 510500 | 5.2909 |
| 5.5741 | 197.91 | 511000 | 5.4365 |
| 5.5713 | 198.1 | 511500 | 5.2402 |
| 5.5583 | 198.3 | 512000 | 5.3146 |
| 5.5669 | 198.49 | 512500 | 5.2166 |
| 5.5523 | 198.68 | 513000 | 5.3176 |
| 5.5626 | 198.88 | 513500 | 5.3053 |
| 5.5788 | 199.07 | 514000 | 5.2880 |
| 5.5682 | 199.26 | 514500 | 5.2790 |
| 5.5499 | 199.46 | 515000 | 5.2771 |
| 5.5783 | 199.65 | 515500 | 5.2516 |
| 5.5425 | 199.85 | 516000 | 5.3402 |
| 5.5472 | 200.04 | 516500 | 5.2679 |
| 5.5628 | 200.23 | 517000 | 5.2623 |
| 5.5635 | 200.43 | 517500 | 5.2496 |
| 5.5645 | 200.62 | 518000 | 5.2267 |
| 5.5567 | 200.81 | 518500 | 5.3454 |
| 5.5591 | 201.01 | 519000 | 5.2430 |
| 5.5729 | 201.2 | 519500 | 5.2992 |
| 5.582 | 201.39 | 520000 | 5.2823 |
| 5.5528 | 201.59 | 520500 | 5.3184 |
| 5.5392 | 201.78 | 521000 | 5.2932 |
| 5.5632 | 201.98 | 521500 | 5.2308 |
| 5.5294 | 202.17 | 522000 | 5.2836 |
| 5.5385 | 202.36 | 522500 | 5.2770 |
| 5.5388 | 202.56 | 523000 | 5.2804 |
| 5.5681 | 202.75 | 523500 | 5.2253 |
| 5.5716 | 202.94 | 524000 | 5.2818 |
| 5.5572 | 203.14 | 524500 | 5.2616 |
| 5.5505 | 203.33 | 525000 | 5.2558 |
| 5.5573 | 203.52 | 525500 | 5.3141 |
| 5.545 | 203.72 | 526000 | 5.2502 |
| 5.5549 | 203.91 | 526500 | 5.2166 |
| 5.5498 | 204.11 | 527000 | 5.2486 |
| 5.5372 | 204.3 | 527500 | 5.2524 |
| 5.5337 | 204.49 | 528000 | 5.2573 |
| 5.5462 | 204.69 | 528500 | 5.2399 |
| 5.5371 | 204.88 | 529000 | 5.2402 |
| 5.5804 | 205.07 | 529500 | 5.2804 |
| 5.5265 | 205.27 | 530000 | 5.2506 |
| 5.5631 | 205.46 | 530500 | 5.2290 |
| 5.5643 | 205.65 | 531000 | 5.2431 |
| 5.5289 | 205.85 | 531500 | 5.2717 |
| 5.5462 | 206.04 | 532000 | 5.2784 |
| 5.5364 | 206.24 | 532500 | 5.3275 |
| 5.5203 | 206.43 | 533000 | 5.3078 |
| 5.5612 | 206.62 | 533500 | 5.2713 |
| 5.5461 | 206.82 | 534000 | 5.2105 |
| 5.4844 | 207.01 | 534500 | 5.2427 |
| 5.5281 | 207.2 | 535000 | 5.2753 |
| 5.5524 | 207.4 | 535500 | 5.2430 |
| 5.5413 | 207.59 | 536000 | 5.2350 |
| 5.5157 | 207.78 | 536500 | 5.2656 |
| 5.538 | 207.98 | 537000 | 5.2013 |
| 5.5398 | 208.17 | 537500 | 5.2710 |
| 5.536 | 208.37 | 538000 | 5.2514 |
| 5.5077 | 208.56 | 538500 | 5.2851 |
| 5.5267 | 208.75 | 539000 | 5.2317 |
| 5.5379 | 208.95 | 539500 | 5.2661 |
| 5.5261 | 209.14 | 540000 | 5.2653 |
| 5.5028 | 209.33 | 540500 | 5.2561 |
| 5.5209 | 209.53 | 541000 | 5.2058 |
| 5.4972 | 209.72 | 541500 | 5.2360 |
| 5.5079 | 209.91 | 542000 | 5.1901 |
| 5.4981 | 210.11 | 542500 | 5.2492 |
| 5.542 | 210.3 | 543000 | 5.2457 |
| 5.5527 | 210.5 | 543500 | 5.2126 |
| 5.5133 | 210.69 | 544000 | 5.2157 |
| 5.5217 | 210.88 | 544500 | 5.2405 |
| 5.5288 | 211.08 | 545000 | 5.2562 |
| 5.5165 | 211.27 | 545500 | 5.2422 |
| 5.524 | 211.46 | 546000 | 5.2168 |
| 5.5541 | 211.66 | 546500 | 5.1961 |
| 5.514 | 211.85 | 547000 | 5.2531 |
| 5.5246 | 212.04 | 547500 | 5.2418 |
| 5.4989 | 212.24 | 548000 | 5.2581 |
| 5.4825 | 212.43 | 548500 | 5.1648 |
| 5.5009 | 212.63 | 549000 | 5.1800 |
| 5.5621 | 212.82 | 549500 | 5.2023 |
| 5.5356 | 213.01 | 550000 | 5.2142 |
| 5.4894 | 213.21 | 550500 | 5.2415 |
| 5.5265 | 213.4 | 551000 | 5.1678 |
| 5.5408 | 213.59 | 551500 | 5.1895 |
| 5.5226 | 213.79 | 552000 | 5.2287 |
| 5.5282 | 213.98 | 552500 | 5.2413 |
| 5.4997 | 214.18 | 553000 | 5.2408 |
| 5.5177 | 214.37 | 553500 | 5.1881 |
| 5.5186 | 214.56 | 554000 | 5.2222 |
| 5.5227 | 214.76 | 554500 | 5.2009 |
| 5.5002 | 214.95 | 555000 | 5.2383 |
| 5.5174 | 215.14 | 555500 | 5.2386 |
| 5.5308 | 215.34 | 556000 | 5.1832 |
| 5.4914 | 215.53 | 556500 | 5.2360 |
| 5.4864 | 215.72 | 557000 | 5.1961 |
| 5.5116 | 215.92 | 557500 | 5.2403 |
| 5.5065 | 216.11 | 558000 | 5.2019 |
| 5.4919 | 216.31 | 558500 | 5.2194 |
| 5.519 | 216.5 | 559000 | 5.2472 |
| 5.5075 | 216.69 | 559500 | 5.2192 |
| 5.5181 | 216.89 | 560000 | 5.2218 |
| 5.5015 | 217.08 | 560500 | 5.2167 |
| 5.487 | 217.27 | 561000 | 5.2329 |
| 5.5179 | 217.47 | 561500 | 5.2464 |
| 5.4807 | 217.66 | 562000 | 5.2115 |
| 5.4998 | 217.85 | 562500 | 5.2462 |
| 5.5032 | 218.05 | 563000 | 5.2216 |
| 5.5031 | 218.24 | 563500 | 5.2147 |
| 5.5083 | 218.44 | 564000 | 5.2162 |
| 5.5038 | 218.63 | 564500 | 5.1412 |
| 5.4659 | 218.82 | 565000 | 5.2629 |
| 5.4794 | 219.02 | 565500 | 5.2163 |
| 5.4744 | 219.21 | 566000 | 5.1878 |
| 5.5054 | 219.4 | 566500 | 5.2107 |
| 5.4841 | 219.6 | 567000 | 5.2308 |
| 5.4891 | 219.79 | 567500 | 5.2575 |
| 5.4531 | 219.98 | 568000 | 5.1906 |
| 5.4901 | 220.18 | 568500 | 5.1901 |
| 5.4622 | 220.37 | 569000 | 5.2440 |
| 5.4799 | 220.57 | 569500 | 5.2478 |
| 5.4893 | 220.76 | 570000 | 5.1878 |
| 5.4961 | 220.95 | 570500 | 5.2147 |
| 5.508 | 221.15 | 571000 | 5.2494 |
| 5.4665 | 221.34 | 571500 | 5.2317 |
| 5.473 | 221.53 | 572000 | 5.2471 |
| 5.4754 | 221.73 | 572500 | 5.2230 |
| 5.4629 | 221.92 | 573000 | 5.2310 |
| 5.4941 | 222.11 | 573500 | 5.2487 |
| 5.5063 | 222.31 | 574000 | 5.1748 |
| 5.5031 | 222.5 | 574500 | 5.2017 |
| 5.4775 | 222.7 | 575000 | 5.1819 |
| 5.477 | 222.89 | 575500 | 5.2201 |
| 5.4974 | 223.08 | 576000 | 5.1915 |
| 5.471 | 223.28 | 576500 | 5.1601 |
| 5.4968 | 223.47 | 577000 | 5.1940 |
| 5.4802 | 223.66 | 577500 | 5.2094 |
| 5.4807 | 223.86 | 578000 | 5.2069 |
| 5.4802 | 224.05 | 578500 | 5.2246 |
| 5.4408 | 224.24 | 579000 | 5.1933 |
| 5.4635 | 224.44 | 579500 | 5.2526 |
| 5.4835 | 224.63 | 580000 | 5.1989 |
| 5.4697 | 224.83 | 580500 | 5.2130 |
| 5.4673 | 225.02 | 581000 | 5.2051 |
| 5.4653 | 225.21 | 581500 | 5.1684 |
| 5.4683 | 225.41 | 582000 | 5.2201 |
| 5.4597 | 225.6 | 582500 | 5.1634 |
| 5.4624 | 225.79 | 583000 | 5.1864 |
| 5.4818 | 225.99 | 583500 | 5.1758 |
| 5.4521 | 226.18 | 584000 | 5.2370 |
| 5.4829 | 226.37 | 584500 | 5.2197 |
| 5.4561 | 226.57 | 585000 | 5.1673 |
| 5.4604 | 226.76 | 585500 | 5.1525 |
| 5.4836 | 226.96 | 586000 | 5.2036 |
| 5.4556 | 227.15 | 586500 | 5.1597 |
| 5.4375 | 227.34 | 587000 | 5.1354 |
| 5.4542 | 227.54 | 587500 | 5.2094 |
| 5.4633 | 227.73 | 588000 | 5.1696 |
| 5.4631 | 227.92 | 588500 | 5.1048 |
| 5.4789 | 228.12 | 589000 | 5.1532 |
| 5.4708 | 228.31 | 589500 | 5.1899 |
| 5.4747 | 228.51 | 590000 | 5.2007 |
| 5.4562 | 228.7 | 590500 | 5.1649 |
| 5.4412 | 228.89 | 591000 | 5.1794 |
| 5.477 | 229.09 | 591500 | 5.1865 |
| 5.4415 | 229.28 | 592000 | 5.1394 |
| 5.4898 | 229.47 | 592500 | 5.1865 |
| 5.4986 | 229.67 | 593000 | 5.1977 |
| 5.4623 | 229.86 | 593500 | 5.1879 |
| 5.444 | 230.05 | 594000 | 5.1844 |
| 5.4514 | 230.25 | 594500 | 5.2079 |
| 5.4847 | 230.44 | 595000 | 5.2058 |
| 5.4936 | 230.64 | 595500 | 5.2204 |
| 5.4266 | 230.83 | 596000 | 5.1847 |
| 5.4596 | 231.02 | 596500 | 5.1775 |
| 5.4662 | 231.22 | 597000 | 5.2368 |
| 5.4447 | 231.41 | 597500 | 5.1629 |
| 5.4276 | 231.6 | 598000 | 5.0777 |
| 5.4758 | 231.8 | 598500 | 5.1242 |
| 5.4492 | 231.99 | 599000 | 5.1298 |
| 5.4386 | 232.18 | 599500 | 5.1472 |
| 5.4425 | 232.38 | 600000 | 5.1869 |
| 5.4525 | 232.57 | 600500 | 5.1746 |
| 5.4361 | 232.77 | 601000 | 5.1657 |
| 5.4606 | 232.96 | 601500 | 5.1502 |
| 5.4587 | 233.15 | 602000 | 5.1334 |
| 5.4491 | 233.35 | 602500 | 5.1452 |
| 5.4599 | 233.54 | 603000 | 5.1541 |
| 5.4692 | 233.73 | 603500 | 5.1343 |
| 5.4423 | 233.93 | 604000 | 5.1430 |
| 5.4387 | 234.12 | 604500 | 5.1566 |
| 5.4616 | 234.31 | 605000 | 5.1718 |
| 5.4678 | 234.51 | 605500 | 5.1338 |
| 5.3934 | 234.7 | 606000 | 5.1227 |
| 5.4454 | 234.9 | 606500 | 5.1688 |
| 5.4402 | 235.09 | 607000 | 5.1094 |
| 5.4294 | 235.28 | 607500 | 5.1227 |
| 5.448 | 235.48 | 608000 | 5.1407 |
| 5.4416 | 235.67 | 608500 | 5.1410 |
| 5.4617 | 235.86 | 609000 | 5.1206 |
| 5.4332 | 236.06 | 609500 | 5.1739 |
| 5.4195 | 236.25 | 610000 | 5.1671 |
| 5.4506 | 236.44 | 610500 | 5.1708 |
| 5.4235 | 236.64 | 611000 | 5.1622 |
| 5.4558 | 236.83 | 611500 | 5.1731 |
| 5.4344 | 237.03 | 612000 | 5.1368 |
| 5.4159 | 237.22 | 612500 | 5.1689 |
| 5.435 | 237.41 | 613000 | 5.1383 |
| 5.4408 | 237.61 | 613500 | 5.1235 |
| 5.416 | 237.8 | 614000 | 5.1519 |
| 5.4317 | 237.99 | 614500 | 5.1538 |
| 5.4444 | 238.19 | 615000 | 5.1710 |
| 5.4177 | 238.38 | 615500 | 5.1571 |
| 5.4352 | 238.57 | 616000 | 5.1401 |
| 5.4216 | 238.77 | 616500 | 5.1795 |
| 5.4412 | 238.96 | 617000 | 5.1101 |
| 5.4403 | 239.16 | 617500 | 5.1405 |
| 5.4694 | 239.35 | 618000 | 5.1463 |
| 5.4101 | 239.54 | 618500 | 5.1289 |
| 5.4316 | 239.74 | 619000 | 5.1274 |
| 5.4291 | 239.93 | 619500 | 5.1681 |
| 5.4204 | 240.12 | 620000 | 5.1824 |
| 5.4092 | 240.32 | 620500 | 5.1620 |
| 5.4151 | 240.51 | 621000 | 5.1428 |
| 5.4235 | 240.7 | 621500 | 5.1342 |
| 5.4342 | 240.9 | 622000 | 5.1091 |
| 5.4166 | 241.09 | 622500 | 5.1483 |
| 5.4166 | 241.29 | 623000 | 5.1497 |
| 5.3939 | 241.48 | 623500 | 5.1323 |
| 5.4253 | 241.67 | 624000 | 5.1281 |
| 5.3985 | 241.87 | 624500 | 5.1087 |
| 5.4103 | 242.06 | 625000 | 5.1538 |
| 5.4106 | 242.25 | 625500 | 5.1367 |
| 5.4258 | 242.45 | 626000 | 5.0969 |
| 5.434 | 242.64 | 626500 | 5.1474 |
| 5.4158 | 242.84 | 627000 | 5.0803 |
| 5.4053 | 243.03 | 627500 | 5.1300 |
| 5.4355 | 243.22 | 628000 | 5.1774 |
| 5.4214 | 243.42 | 628500 | 5.1289 |
| 5.3964 | 243.61 | 629000 | 5.1782 |
| 5.4092 | 243.8 | 629500 | 5.1291 |
| 5.3865 | 244.0 | 630000 | 5.2033 |
| 5.415 | 244.19 | 630500 | 5.1307 |
| 5.4053 | 244.38 | 631000 | 5.1285 |
| 5.4083 | 244.58 | 631500 | 5.1260 |
| 5.4308 | 244.77 | 632000 | 5.1111 |
| 5.4088 | 244.97 | 632500 | 5.1473 |
| 5.404 | 245.16 | 633000 | 5.1695 |
| 5.4006 | 245.35 | 633500 | 5.1438 |
| 5.3848 | 245.55 | 634000 | 5.1529 |
| 5.4202 | 245.74 | 634500 | 5.1223 |
| 5.4029 | 245.93 | 635000 | 5.0946 |
| 5.3855 | 246.13 | 635500 | 5.1392 |
| 5.4303 | 246.32 | 636000 | 5.1367 |
| 5.4033 | 246.51 | 636500 | 5.1017 |
| 5.4325 | 246.71 | 637000 | 5.1393 |
| 5.4134 | 246.9 | 637500 | 5.1543 |
| 5.3986 | 247.1 | 638000 | 5.1309 |
| 5.3746 | 247.29 | 638500 | 5.1322 |
| 5.4197 | 247.48 | 639000 | 5.1160 |
| 5.4235 | 247.68 | 639500 | 5.1321 |
| 5.3706 | 247.87 | 640000 | 5.1676 |
| 5.4018 | 248.06 | 640500 | 5.1096 |
| 5.3822 | 248.26 | 641000 | 5.0967 |
| 5.4332 | 248.45 | 641500 | 5.1486 |
| 5.3951 | 248.64 | 642000 | 5.1048 |
| 5.3899 | 248.84 | 642500 | 5.1297 |
| 5.3887 | 249.03 | 643000 | 5.1264 |
| 5.3808 | 249.23 | 643500 | 5.1108 |
| 5.3934 | 249.42 | 644000 | 5.1363 |
| 5.4008 | 249.61 | 644500 | 5.1109 |
| 5.4168 | 249.81 | 645000 | 5.1005 |
| 5.3844 | 250.0 | 645500 | 5.1302 |
| 5.396 | 250.19 | 646000 | 5.1385 |
| 5.4019 | 250.39 | 646500 | 5.1112 |
| 5.3883 | 250.58 | 647000 | 5.1359 |
| 5.3982 | 250.77 | 647500 | 5.1295 |
| 5.3858 | 250.97 | 648000 | 5.1397 |
| 5.4064 | 251.16 | 648500 | 5.1076 |
| 5.3845 | 251.36 | 649000 | 5.1030 |
| 5.3977 | 251.55 | 649500 | 5.1283 |
| 5.3936 | 251.74 | 650000 | 5.0607 |
| 5.3917 | 251.94 | 650500 | 5.1286 |
| 5.3857 | 252.13 | 651000 | 5.1203 |
| 5.4092 | 252.32 | 651500 | 5.0867 |
| 5.3949 | 252.52 | 652000 | 5.0936 |
| 5.3909 | 252.71 | 652500 | 5.1033 |
| 5.3748 | 252.9 | 653000 | 5.1448 |
| 5.36 | 253.1 | 653500 | 5.1007 |
| 5.4047 | 253.29 | 654000 | 5.1083 |
| 5.3664 | 253.49 | 654500 | 5.1111 |
| 5.3728 | 253.68 | 655000 | 5.1023 |
| 5.3863 | 253.87 | 655500 | 5.0889 |
| 5.3781 | 254.07 | 656000 | 5.0758 |
| 5.384 | 254.26 | 656500 | 5.0883 |
| 5.3748 | 254.45 | 657000 | 5.1066 |
| 5.4297 | 254.65 | 657500 | 5.0840 |
| 5.3763 | 254.84 | 658000 | 5.0740 |
| 5.3915 | 255.03 | 658500 | 5.0531 |
| 5.401 | 255.23 | 659000 | 5.1152 |
| 5.4052 | 255.42 | 659500 | 5.1129 |
| 5.4131 | 255.62 | 660000 | 5.1075 |
| 5.3829 | 255.81 | 660500 | 5.1153 |
| 5.3764 | 256.0 | 661000 | 5.1075 |
| 5.3757 | 256.2 | 661500 | 5.1077 |
| 5.3944 | 256.39 | 662000 | 5.1051 |
| 5.3688 | 256.58 | 662500 | 5.0953 |
| 5.4085 | 256.78 | 663000 | 5.1339 |
| 5.3561 | 256.97 | 663500 | 5.0772 |
| 5.3754 | 257.16 | 664000 | 5.1090 |
| 5.407 | 257.36 | 664500 | 5.1180 |
| 5.3627 | 257.55 | 665000 | 5.1054 |
| 5.3866 | 257.75 | 665500 | 5.1373 |
| 5.3599 | 257.94 | 666000 | 5.0439 |
| 5.3825 | 258.13 | 666500 | 5.0759 |
| 5.3584 | 258.33 | 667000 | 5.1097 |
| 5.3478 | 258.52 | 667500 | 5.1463 |
| 5.3608 | 258.71 | 668000 | 5.1012 |
| 5.4128 | 258.91 | 668500 | 5.1192 |
| 5.378 | 259.1 | 669000 | 5.0897 |
| 5.3831 | 259.3 | 669500 | 5.1095 |
| 5.3687 | 259.49 | 670000 | 5.0835 |
| 5.3658 | 259.68 | 670500 | 5.0947 |
| 5.3531 | 259.88 | 671000 | 5.0795 |
| 5.3745 | 260.07 | 671500 | 5.1075 |
| 5.4171 | 260.26 | 672000 | 5.1051 |
| 5.3669 | 260.46 | 672500 | 5.1055 |
| 5.4015 | 260.65 | 673000 | 5.1121 |
| 5.3423 | 260.84 | 673500 | 5.1391 |
| 5.3811 | 261.04 | 674000 | 5.0921 |
| 5.3607 | 261.23 | 674500 | 5.1021 |
| 5.3556 | 261.43 | 675000 | 5.0886 |
| 5.3887 | 261.62 | 675500 | 5.0489 |
| 5.3793 | 261.81 | 676000 | 5.1188 |
| 5.3871 | 262.01 | 676500 | 5.1047 |
| 5.3597 | 262.2 | 677000 | 5.1699 |
| 5.3839 | 262.39 | 677500 | 5.0961 |
| 5.3735 | 262.59 | 678000 | 5.1041 |
| 5.3725 | 262.78 | 678500 | 5.0690 |
| 5.3593 | 262.97 | 679000 | 5.0925 |
| 5.3571 | 263.17 | 679500 | 5.0774 |
| 5.3717 | 263.36 | 680000 | 5.1172 |
| 5.3609 | 263.56 | 680500 | 5.0873 |
| 5.3773 | 263.75 | 681000 | 5.1073 |
| 5.381 | 263.94 | 681500 | 5.0893 |
| 5.3406 | 264.14 | 682000 | 5.0634 |
| 5.383 | 264.33 | 682500 | 5.0769 |
| 5.3703 | 264.52 | 683000 | 5.0812 |
| 5.3568 | 264.72 | 683500 | 5.0918 |
| 5.3321 | 264.91 | 684000 | 5.1248 |
| 5.3735 | 265.1 | 684500 | 5.0733 |
| 5.3796 | 265.3 | 685000 | 5.0809 |
| 5.3352 | 265.49 | 685500 | 5.1017 |
| 5.3727 | 265.69 | 686000 | 5.0930 |
| 5.3333 | 265.88 | 686500 | 5.0893 |
| 5.3516 | 266.07 | 687000 | 5.1134 |
| 5.3768 | 266.27 | 687500 | 5.0761 |
| 5.3685 | 266.46 | 688000 | 5.0557 |
| 5.3604 | 266.65 | 688500 | 5.0616 |
| 5.3663 | 266.85 | 689000 | 5.0996 |
| 5.3756 | 267.04 | 689500 | 5.0806 |
| 5.3703 | 267.23 | 690000 | 5.0482 |
| 5.3772 | 267.43 | 690500 | 5.0874 |
| 5.3504 | 267.62 | 691000 | 5.0664 |
| 5.3695 | 267.82 | 691500 | 5.0752 |
| 5.3701 | 268.01 | 692000 | 5.0659 |
| 5.3811 | 268.2 | 692500 | 5.1069 |
| 5.3568 | 268.4 | 693000 | 5.0801 |
| 5.3752 | 268.59 | 693500 | 5.0727 |
| 5.3718 | 268.78 | 694000 | 5.0704 |
| 5.3419 | 268.98 | 694500 | 5.0735 |
| 5.3343 | 269.17 | 695000 | 5.0845 |
| 5.3348 | 269.36 | 695500 | 5.0549 |
| 5.3558 | 269.56 | 696000 | 5.0596 |
| 5.3729 | 269.75 | 696500 | 5.0374 |
| 5.3514 | 269.95 | 697000 | 5.0976 |
| 5.36 | 270.14 | 697500 | 5.0621 |
| 5.3763 | 270.33 | 698000 | 5.0889 |
| 5.3516 | 270.53 | 698500 | 5.0927 |
| 5.3824 | 270.72 | 699000 | 5.0988 |
| 5.3635 | 270.91 | 699500 | 5.0921 |
| 5.3366 | 271.11 | 700000 | 5.0688 |
| 5.358 | 271.3 | 700500 | 5.0585 |
| 5.37 | 271.49 | 701000 | 5.0990 |
| 5.3629 | 271.69 | 701500 | 5.1258 |
| 5.347 | 271.88 | 702000 | 5.0644 |
| 5.3331 | 272.08 | 702500 | 5.0988 |
| 5.3516 | 272.27 | 703000 | 5.0773 |
| 5.3345 | 272.46 | 703500 | 5.0567 |
| 5.3495 | 272.66 | 704000 | 5.1025 |
| 5.3315 | 272.85 | 704500 | 5.0231 |
| 5.3698 | 273.04 | 705000 | 5.0677 |
| 5.347 | 273.24 | 705500 | 5.0602 |
| 5.3708 | 273.43 | 706000 | 5.0575 |
| 5.3065 | 273.63 | 706500 | 5.0442 |
| 5.3453 | 273.82 | 707000 | 5.0758 |
| 5.3408 | 274.01 | 707500 | 5.0838 |
| 5.3429 | 274.21 | 708000 | 5.0919 |
| 5.342 | 274.4 | 708500 | 5.0556 |
| 5.3612 | 274.59 | 709000 | 5.0716 |
| 5.3666 | 274.79 | 709500 | 5.0837 |
| 5.3473 | 274.98 | 710000 | 5.0536 |
| 5.3684 | 275.17 | 710500 | 5.0759 |
| 5.3545 | 275.37 | 711000 | 5.0618 |
| 5.3424 | 275.56 | 711500 | 5.0807 |
| 5.3489 | 275.76 | 712000 | 5.0750 |
| 5.3409 | 275.95 | 712500 | 5.0264 |
| 5.3136 | 276.14 | 713000 | 5.0516 |
| 5.3393 | 276.34 | 713500 | 5.0836 |
| 5.3348 | 276.53 | 714000 | 5.0567 |
| 5.3743 | 276.72 | 714500 | 5.0857 |
| 5.3356 | 276.92 | 715000 | 5.0667 |
| 5.3431 | 277.11 | 715500 | 5.0481 |
| 5.3539 | 277.3 | 716000 | 5.0604 |
| 5.3587 | 277.5 | 716500 | 5.0900 |
| 5.3671 | 277.69 | 717000 | 5.0950 |
| 5.3414 | 277.89 | 717500 | 5.0792 |
| 5.3247 | 278.08 | 718000 | 5.0677 |
| 5.348 | 278.27 | 718500 | 5.0357 |
| 5.3521 | 278.47 | 719000 | 5.0454 |
| 5.3353 | 278.66 | 719500 | 5.0591 |
| 5.3691 | 278.85 | 720000 | 5.0540 |
| 5.3516 | 279.05 | 720500 | 5.0605 |
| 5.3626 | 279.24 | 721000 | 5.0448 |
| 5.3586 | 279.43 | 721500 | 5.0610 |
| 5.3456 | 279.63 | 722000 | 5.0509 |
| 5.3334 | 279.82 | 722500 | 5.0505 |
| 5.3487 | 280.02 | 723000 | 5.0647 |
| 5.3585 | 280.21 | 723500 | 5.0700 |
| 5.3031 | 280.4 | 724000 | 5.0509 |
| 5.3425 | 280.6 | 724500 | 5.0527 |
| 5.3564 | 280.79 | 725000 | 5.0422 |
| 5.3275 | 280.98 | 725500 | 5.0818 |
| 5.3389 | 281.18 | 726000 | 5.0567 |
| 5.3327 | 281.37 | 726500 | 5.0413 |
| 5.3321 | 281.56 | 727000 | 5.0821 |
| 5.3523 | 281.76 | 727500 | 5.0261 |
| 5.3471 | 281.95 | 728000 | 5.0301 |
| 5.3497 | 282.15 | 728500 | 5.0944 |
| 5.3607 | 282.34 | 729000 | 5.0698 |
| 5.3229 | 282.53 | 729500 | 5.0782 |
| 5.3291 | 282.73 | 730000 | 5.0224 |
| 5.3465 | 282.92 | 730500 | 5.0285 |
| 5.3333 | 283.11 | 731000 | 5.0422 |
| 5.3303 | 283.31 | 731500 | 5.0738 |
| 5.344 | 283.5 | 732000 | 5.0664 |
| 5.3354 | 283.69 | 732500 | 5.0302 |
| 5.3657 | 283.89 | 733000 | 5.0333 |
| 5.3483 | 284.08 | 733500 | 5.0612 |
| 5.336 | 284.28 | 734000 | 5.0713 |
| 5.3131 | 284.47 | 734500 | 5.0794 |
| 5.3473 | 284.66 | 735000 | 5.0451 |
| 5.3139 | 284.86 | 735500 | 5.0408 |
| 5.3561 | 285.05 | 736000 | 5.0525 |
| 5.3515 | 285.24 | 736500 | 5.0468 |
| 5.3405 | 285.44 | 737000 | 5.0607 |
| 5.3363 | 285.63 | 737500 | 5.0528 |
| 5.3144 | 285.82 | 738000 | 5.0766 |
| 5.3563 | 286.02 | 738500 | 5.0321 |
| 5.3151 | 286.21 | 739000 | 5.0005 |
| 5.3374 | 286.41 | 739500 | 5.0595 |
| 5.3336 | 286.6 | 740000 | 5.0523 |
| 5.3383 | 286.79 | 740500 | 5.0394 |
| 5.3445 | 286.99 | 741000 | 5.0588 |
| 5.3431 | 287.18 | 741500 | 5.0369 |
| 5.3277 | 287.37 | 742000 | 5.0628 |
| 5.3357 | 287.57 | 742500 | 5.0469 |
| 5.3348 | 287.76 | 743000 | 5.0368 |
| 5.3445 | 287.96 | 743500 | 5.0085 |
| 5.3292 | 288.15 | 744000 | 5.0724 |
| 5.3213 | 288.34 | 744500 | 5.0137 |
| 5.3251 | 288.54 | 745000 | 5.0576 |
| 5.3222 | 288.73 | 745500 | 5.0740 |
| 5.3121 | 288.92 | 746000 | 5.0114 |
| 5.3232 | 289.12 | 746500 | 5.0531 |
| 5.3315 | 289.31 | 747000 | 5.0426 |
| 5.3392 | 289.5 | 747500 | 5.0531 |
| 5.3187 | 289.7 | 748000 | 5.0661 |
| 5.3701 | 289.89 | 748500 | 5.0260 |
| 5.3446 | 290.09 | 749000 | 5.0125 |
| 5.3465 | 290.28 | 749500 | 5.0423 |
| 5.3283 | 290.47 | 750000 | 5.0366 |
| 5.338 | 290.67 | 750500 | 5.0667 |
| 5.2954 | 290.86 | 751000 | 5.0613 |
| 5.3194 | 291.05 | 751500 | 5.0521 |
| 5.3367 | 291.25 | 752000 | 5.0795 |
| 5.3469 | 291.44 | 752500 | 5.0709 |
| 5.3262 | 291.63 | 753000 | 5.0545 |
| 5.3107 | 291.83 | 753500 | 5.0195 |
| 5.3104 | 292.02 | 754000 | 5.0633 |
| 5.343 | 292.22 | 754500 | 5.0673 |
| 5.3171 | 292.41 | 755000 | 5.0391 |
| 5.344 | 292.6 | 755500 | 5.0445 |
| 5.3257 | 292.8 | 756000 | 5.0666 |
| 5.3102 | 292.99 | 756500 | 5.0197 |
| 5.3254 | 293.18 | 757000 | 5.0403 |
| 5.3494 | 293.38 | 757500 | 5.0233 |
| 5.3615 | 293.57 | 758000 | 5.0868 |
| 5.2934 | 293.76 | 758500 | 5.0730 |
| 5.3434 | 293.96 | 759000 | 5.0714 |
| 5.3512 | 294.15 | 759500 | 5.0396 |
| 5.3311 | 294.35 | 760000 | 5.0887 |
| 5.3422 | 294.54 | 760500 | 5.0571 |
| 5.3067 | 294.73 | 761000 | 5.0656 |
| 5.3382 | 294.93 | 761500 | 5.0728 |
| 5.3367 | 295.12 | 762000 | 5.0628 |
| 5.3343 | 295.31 | 762500 | 5.0472 |
| 5.3154 | 295.51 | 763000 | 5.0429 |
| 5.3099 | 295.7 | 763500 | 5.0384 |
| 5.3299 | 295.89 | 764000 | 5.0563 |
| 5.312 | 296.09 | 764500 | 5.0682 |
| 5.3282 | 296.28 | 765000 | 5.0360 |
| 5.3336 | 296.48 | 765500 | 5.0175 |
| 5.3495 | 296.67 | 766000 | 5.0728 |
| 5.3393 | 296.86 | 766500 | 5.0527 |
| 5.3478 | 297.06 | 767000 | 5.0398 |
| 5.3249 | 297.25 | 767500 | 5.0344 |
| 5.3217 | 297.44 | 768000 | 5.0458 |
| 5.3291 | 297.64 | 768500 | 5.1057 |
| 5.3253 | 297.83 | 769000 | 5.0360 |
| 5.3124 | 298.02 | 769500 | 5.0854 |
| 5.3029 | 298.22 | 770000 | 5.0250 |
| 5.3263 | 298.41 | 770500 | 5.0399 |
| 5.325 | 298.61 | 771000 | 5.0587 |
| 5.3315 | 298.8 | 771500 | 5.0548 |
| 5.2862 | 298.99 | 772000 | 5.0644 |
| 5.3218 | 299.19 | 772500 | 5.0562 |
| 5.3233 | 299.38 | 773000 | 5.0442 |
| 5.3001 | 299.57 | 773500 | 5.0263 |
| 5.334 | 299.77 | 774000 | 5.0736 |
| 5.327 | 299.96 | 774500 | 5.0648 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
ChoboAvenger/DialoGPT-small-DocBot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T19:49:18Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-12k
- wit-400m
---
# Model card for vit_base_patch16_clip_224.openai_ft_in12k
A Vision Transformer (ViT) image classification model. Pretrained on WIT-400M image-text pairs by OpenAI using CLIP. Fine-tuned on ImageNet-12k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 94.9
- GMACs: 16.9
- Activations (M): 16.5
- Image size: 224 x 224
- **Papers:**
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-12k
- **Pretrain Dataset:**
- WIT-400M
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch16_clip_224.openai_ft_in12k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
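The raw `output` is a logits tensor over the ImageNet-12k label space; the softmax rescales it to percentage probabilities, and `torch.topk` returns the five most likely class indices together with their scores.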
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_clip_224.openai_ft_in12k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
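Passing `num_classes=0` at creation removes the classifier head so the model returns pooled features directly, while the `forward_features` / `forward_head(pre_logits=True)` pair keeps the full model and exposes both the unpooled token sequence and the pooled embedding.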
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
ChoboAvenger/DialoGPT-small-joshua | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T20:03:02Z | ---
license: other
tags:
- generated_from_keras_callback
model-index:
- name: nateraw/mit-b0-finetuned-sidewalks-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nateraw/mit-b0-finetuned-sidewalks-v2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1508
- Validation Loss: 0.4301
- Validation Mean Iou: 0.3689
- Validation Mean Accuracy: 0.4261
- Validation Overall Accuracy: 0.8878
- Validation Per Category Iou: [0. 0.82155443 0.88837272 0.80869927 0.84681809 0.50445633
nan 0.5062558 0.58202362 0.09694114 0.86506226 0.10300594
0. 0.03122511 0. 0.55651564 0. 0.
0.76493797 0.04021662 0.40453306 0.56038987 0.34382567 nan
0.02428609 0.30885576 0.28811326 0. 0.87087236 0.74857511
0.94321046 0.02300712 0.03721037 0.20366003 0. ]
- Validation Per Category Accuracy: [0. 0.88109026 0.95044945 0.85142397 0.95993416 0.6370042
nan 0.65971511 0.81045852 0.11321606 0.95401169 0.10670369
0. 0.04042904 0. 0.66801313 0. 0.
0.90595882 0.04265001 0.5292762 0.61230561 0.4092219 nan
0.0283755 0.37721503 0.3266398 0. 0.950358 0.87250445
0.96996696 0.02583519 0.09486859 0.28234463 0. ]
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
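The listed optimizer values beyond the learning rate (`beta_1`, `beta_2`, `epsilon`, `decay`, `amsgrad`) match the Keras Adam defaults. Below is a minimal sketch of how such a run could be set up, assuming the card was produced with the TF SegFormer classes from `transformers`; the dataset pipeline, the number of labels (35, inferred from the per-category metric arrays above), and the epoch count are assumptions, not part of this card.
```python
# Hedged sketch of the training setup implied by the hyperparameters above.
# Assumptions: TFSegformerForSemanticSegmentation from transformers, 35 labels,
# and placeholder tf.data datasets (train_dataset / eval_dataset) not shown here.
import tensorflow as tf
from transformers import TFSegformerForSemanticSegmentation

num_labels = 35 # inferred from the 35-entry per-category metric arrays
model = TFSegformerForSemanticSegmentation.from_pretrained(
"nvidia/mit-b0", # the base checkpoint named in this card
num_labels=num_labels,
)

# Optimizer matching the hyperparameters listed above (remaining values are Keras defaults).
optimizer = tf.keras.optimizers.Adam(learning_rate=6e-5)

# transformers TF models can be compiled without an explicit loss; the model
# then computes its own semantic-segmentation loss from the provided labels.
model.compile(optimizer=optimizer)

# model.fit(train_dataset, validation_data=eval_dataset, epochs=50, callbacks=[...])
```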
### Training results
| Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Per Category Iou | Validation Per Category Accuracy | Epoch |
|:----------:|:---------------:|:-------------------:|:------------------------:|:---------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----:|
| 1.4089 | 0.8220 | 0.1975 | 0.2427 | 0.7701 | [0. 0.58353931 0.7655921 0.04209491 0.53135026 0.11779776
nan 0.07709853 0.15950712 0. 0.69634813 0.
0. 0. 0. 0. 0. 0.
0.61456822 0. 0.24971248 0.27129675 0. nan
0. 0.07697324 0. 0. 0.78576516 0.61267064
0.84564576 0. 0. 0.08904216 0. ] | [0. 0.88026971 0.93475302 0.04216372 0.5484085 0.13285614
nan 0.08669707 0.19044773 0. 0.90089024 0.
0. 0. 0. 0. 0. 0.
0.76783975 0. 0.42102101 0.28659817 0. nan
0. 0.08671771 0. 0. 0.89590301 0.74932576
0.9434814 0. 0. 0.14245566 0. ] | 0 |
| 0.8462 | 0.6135 | 0.2551 | 0.2960 | 0.8200 | [0. 0.66967645 0.80571406 0.56416239 0.66692248 0.24744912
nan 0.23994505 0.28962463 0. 0.76504783 0.
0. 0. 0. 0.14111353 0. 0.
0.6924468 0. 0.27988701 0.41876094 0. nan
0. 0.14755829 0. 0. 0.81614463 0.68429711
0.87710938 0. 0. 0.11234171 0. ] | [0. 0.83805933 0.94928385 0.59586511 0.72913519 0.30595504
nan 0.3128234 0.34805831 0. 0.87847495 0.
0. 0. 0. 0.14205167 0. 0.
0.87543619 0. 0.36001144 0.49498574 0. nan
0. 0.18179115 0. 0. 0.92867923 0.7496178
0.92220166 0. 0. 0.15398549 0. ] | 1 |
| 0.7134 | 0.5660 | 0.2780 | 0.3320 | 0.8286 | [0. 0.64791461 0.83800512 0.67301044 0.68120631 0.27361472
nan 0.26715802 0.43596999 0. 0.78649287 0.
0. 0. 0. 0.41256964 0. 0.
0.71114766 0. 0.31646321 0.44682442 0. nan
0. 0.17132551 0. 0. 0.81845697 0.67536699
0.88940936 0. 0. 0.1304862 0. ] | [0. 0.85958877 0.92084269 0.82341633 0.74725972 0.33495972
nan 0.40755277 0.56591531 0. 0.90641721 0.
0. 0. 0. 0.48144408 0. 0.
0.88294811 0. 0.46962078 0.47517397 0. nan
0. 0.20631607 0. 0. 0.90956851 0.85856042
0.94107052 0. 0. 0.16669713 0. ] | 2 |
| 0.6320 | 0.5173 | 0.2894 | 0.3454 | 0.8435 | [0. 0.70789146 0.84902296 0.65266358 0.76099965 0.32934391
nan 0.29576422 0.43988204 0. 0.79276447 0.
0. 0. 0. 0.42668367 0. 0.
0.71717911 0. 0.32151249 0.50084444 0. nan
0. 0.18711455 0. 0. 0.82903803 0.68990498
0.8990059 0. 0.00213015 0.14819771 0. ] | [0. 0.84048763 0.93514369 0.68355212 0.88302113 0.458816
nan 0.38623272 0.69456442 0. 0.92379471 0.
0. 0. 0. 0.50677438 0. 0.
0.90362965 0. 0.4662386 0.57368294 0. nan
0. 0.23281768 0. 0. 0.9001526 0.86786434
0.95195314 0. 0.00333751 0.18532191 0. ] | 3 |
| 0.5609 | 0.5099 | 0.2920 | 0.3599 | 0.8385 | [0. 0.70817583 0.84131144 0.66573523 0.81449696 0.38891117
nan 0.28124784 0.42659255 0. 0.80855146 0.
0. 0. 0. 0.46011866 0. 0.
0.65458792 0. 0.28411565 0.46758138 0. nan
0. 0.21849067 0. 0. 0.83829062 0.71207623
0.89929169 0. 0.02846127 0.13782635 0. ] | [0. 0.88632871 0.91269832 0.79044294 0.88368528 0.57405218
nan 0.35035973 0.77610775 0. 0.8889696 0.
0. 0. 0. 0.6020786 0. 0.
0.74586521 0. 0.61602403 0.54519561 0. nan
0. 0.28447396 0. 0. 0.94520232 0.85544414
0.95994042 0. 0.04680851 0.21407134 0. ] | 4 |
| 0.5256 | 0.4741 | 0.3045 | 0.3598 | 0.8558 | [0.00000000e+00 7.50159008e-01 8.53654462e-01 6.44928131e-01
7.90455244e-01 4.33599913e-01 nan 3.33472954e-01
4.74502513e-01 0.00000000e+00 8.01366017e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 4.67653814e-01
0.00000000e+00 0.00000000e+00 7.27412479e-01 0.00000000e+00
4.18946113e-01 5.04714837e-01 0.00000000e+00 nan
0.00000000e+00 2.00373855e-01 0.00000000e+00 0.00000000e+00
8.50200795e-01 7.41636173e-01 9.08320534e-01 2.77259907e-04
0.00000000e+00 1.45430716e-01 0.00000000e+00] | [0.00000000e+00 8.86487233e-01 9.05201886e-01 7.23139265e-01
8.91929263e-01 7.26675641e-01 nan 4.36386295e-01
6.64378543e-01 0.00000000e+00 8.89056843e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 5.65450644e-01
0.00000000e+00 0.00000000e+00 9.27446136e-01 0.00000000e+00
5.36031025e-01 5.84198054e-01 0.00000000e+00 nan
0.00000000e+00 2.42514534e-01 0.00000000e+00 0.00000000e+00
9.31954754e-01 8.26849708e-01 9.59880377e-01 2.79039335e-04
0.00000000e+00 1.77106051e-01 0.00000000e+00] | 5 |
| 0.4761 | 0.4922 | 0.3036 | 0.3754 | 0.8517 | [0.00000000e+00 7.18490241e-01 8.54701589e-01 5.90903088e-01
8.21902743e-01 4.76229883e-01 nan 3.32447673e-01
4.80642540e-01 0.00000000e+00 8.02904449e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 4.73285636e-01
0.00000000e+00 0.00000000e+00 7.16608930e-01 0.00000000e+00
3.16598081e-01 5.12540924e-01 0.00000000e+00 nan
0.00000000e+00 2.27702968e-01 0.00000000e+00 0.00000000e+00
8.51831675e-01 7.39827330e-01 9.07152231e-01 5.59070700e-04
3.70370370e-02 1.56538301e-01 0.00000000e+00] | [0.00000000e+00 9.20834531e-01 8.92075255e-01 7.48664032e-01
9.03709011e-01 7.40703529e-01 nan 4.40828188e-01
7.92719139e-01 0.00000000e+00 9.21593374e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 6.90292855e-01
0.00000000e+00 0.00000000e+00 8.42229041e-01 0.00000000e+00
4.75170857e-01 6.72591473e-01 0.00000000e+00 nan
0.00000000e+00 2.94713089e-01 0.00000000e+00 0.00000000e+00
9.26034809e-01 8.39522012e-01 9.66679296e-01 6.06188900e-04
1.12807676e-01 2.07280968e-01 0.00000000e+00] | 6 |
| 0.4495 | 0.4797 | 0.3035 | 0.3702 | 0.8468 | [0.00000000e+00 7.52163526e-01 8.46563375e-01 7.16396797e-01
7.38850637e-01 3.93073019e-01 nan 3.31795957e-01
4.92991567e-01 0.00000000e+00 8.11302090e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 5.16059849e-01
0.00000000e+00 0.00000000e+00 6.56058294e-01 1.25948501e-02
2.66942435e-01 5.34406894e-01 0.00000000e+00 nan
0.00000000e+00 2.27750085e-01 4.86381323e-04 0.00000000e+00
8.48618960e-01 7.25828093e-01 9.17747637e-01 8.28380212e-03
6.74590297e-02 1.51281596e-01 0.00000000e+00] | [0.00000000e+00 8.75360044e-01 9.43650850e-01 8.78658645e-01
7.76578096e-01 4.85757596e-01 nan 4.30901582e-01
7.54126335e-01 0.00000000e+00 9.30112537e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 6.42914247e-01
0.00000000e+00 0.00000000e+00 7.57605356e-01 1.27102686e-02
6.50888458e-01 6.94757080e-01 0.00000000e+00 nan
0.00000000e+00 2.91727649e-01 4.86381323e-04 0.00000000e+00
9.42251577e-01 8.60753175e-01 9.56778008e-01 8.51551074e-03
1.38756779e-01 1.83583708e-01 0.00000000e+00] | 7 |
| 0.4193 | 0.4487 | 0.3073 | 0.3633 | 0.8594 | [0. 0.77081114 0.86089485 0.64464211 0.82962632 0.36186873
nan 0.39092332 0.5399988 0. 0.81734925 0.
0. 0. 0. 0.50271555 0. 0.
0.70239658 0. 0.30875695 0.52195319 0. nan
0. 0.20124517 0.00696273 0. 0.84526591 0.72563399
0.91703372 0. 0.03526147 0.15693635 0. ] | [0. 0.8654775 0.95711297 0.70665759 0.93130714 0.42436958
nan 0.52892143 0.69243377 0. 0.91682626 0.
0. 0. 0. 0.62315913 0. 0.
0.86251114 0. 0.5607807 0.70416055 0. nan
0. 0.24483525 0.00698305 0. 0.921099 0.81848055
0.96789871 0. 0.06891948 0.18778302 0. ] | 8 |
| 0.3883 | 0.4824 | 0.3086 | 0.3690 | 0.8527 | [0. 0.76454291 0.86544951 0.70501066 0.77912256 0.39088976
nan 0.40275725 0.53334923 0. 0.82777802 0.
0. 0. 0. 0.49916177 0. 0.
0.68780083 0.01500768 0.31589145 0.53805504 0. nan
0. 0.22450413 0.03544121 0. 0.82663975 0.60689445
0.91513911 0.12702194 0.0163284 0.10604071 0. ] | [0. 0.86846682 0.93345513 0.77258597 0.90365389 0.54440067
nan 0.51997559 0.73323435 0. 0.92499729 0.
0. 0. 0. 0.62015064 0. 0.
0.8190305 0.01503264 0.61258781 0.62514291 0. nan
0. 0.28141855 0.03574903 0. 0.95838638 0.66828866
0.96505306 0.19804095 0.04463913 0.1315269 0. ] | 9 |
| 0.3736 | 0.4515 | 0.3180 | 0.3859 | 0.8600 | [0. 0.77296038 0.8679117 0.60122746 0.84573808 0.42877201
nan 0.40372521 0.5356554 0. 0.82057963 0.
0. 0. 0. 0.48309209 0. 0.
0.70156487 0.07165346 0.31172072 0.45383525 0. nan
0. 0.26337213 0.07457255 0. 0.85227381 0.7079085
0.92271657 0.20363628 0.03853875 0.13249146 0. ] | [0. 0.90081404 0.93156248 0.71723323 0.91251575 0.57187527
nan 0.53665381 0.74547838 0. 0.93718616 0.
0. 0. 0. 0.6410839 0. 0.
0.80529967 0.07249561 0.6074764 0.5775282 0. nan
0. 0.34898163 0.07545859 0. 0.95221746 0.80297775
0.96768443 0.26155608 0.19382562 0.17354842 0. ] | 10 |
| 0.3487 | 0.4486 | 0.3181 | 0.3898 | 0.8637 | [0. 0.79416982 0.87767891 0.70942695 0.81634288 0.46749785
nan 0.42873013 0.48671464 0. 0.82752704 0.
0. 0. 0. 0.50844774 0. 0.
0.68070149 0.03976498 0.29304387 0.46322705 0. nan
0. 0.24856882 0.12795031 0. 0.84646906 0.71781094
0.92550642 0.04810685 0.04610752 0.14423047 0. ] | [0. 0.86951324 0.95247608 0.82408892 0.90393017 0.59760857
nan 0.5760741 0.83602638 0. 0.93420702 0.
0. 0. 0. 0.63502483 0. 0.
0.76902695 0.04024918 0.57179186 0.75842139 0. nan
0. 0.30837498 0.13239994 0. 0.95283514 0.78607095
0.96594744 0.05354669 0.18906967 0.2060098 0. ] | 11 |
| 0.3460 | 0.4342 | 0.3234 | 0.3852 | 0.8669 | [0. 0.76828673 0.86958873 0.66044471 0.84588115 0.46323947
nan 0.41208499 0.54202812 0. 0.82543751 0.
0. 0. 0. 0.50071248 0. 0.
0.72333932 0.0173886 0.36535728 0.5284402 0. nan
0. 0.24239821 0.13456635 0. 0.86084123 0.73217705
0.92386442 0.09545854 0.04193608 0.11945951 0. ] | [0. 0.92666259 0.91906703 0.74134089 0.92518489 0.60022437
nan 0.56316038 0.77045814 0. 0.93600314 0.
0. 0. 0. 0.61358664 0. 0.
0.87835072 0.01757469 0.57608316 0.64108174 0. nan
0. 0.30432247 0.13750695 0. 0.93332326 0.85806371
0.96442783 0.10753599 0.15152274 0.14552189 0. ] | 12 |
| 0.3146 | 0.4175 | 0.3339 | 0.3995 | 0.8745 | [0. 0.81054591 0.88286867 0.68551149 0.86089895 0.4562385
nan 0.4522713 0.55496016 0.01456189 0.83576109 0.
0. 0. 0. 0.50709788 0. 0.
0.73464008 0.00175153 0.35021502 0.57263292 0. nan
0. 0.25185222 0.14419755 0. 0.85952374 0.70281003
0.9270307 0.17660456 0.04867831 0.18762581 0. ] | [0. 0.9092016 0.94168672 0.86545289 0.89611216 0.55273728
nan 0.61409823 0.76682349 0.01569689 0.92776282 0.
0. 0. 0. 0.59972229 0. 0.
0.86700656 0.00175747 0.54181633 0.67419762 0. nan
0. 0.3252672 0.14789466 0. 0.9316378 0.88743565
0.97060047 0.33277846 0.15319149 0.25967892 0. ] | 13 |
| 0.3000 | 0.4196 | 0.3263 | 0.3833 | 0.8720 | [0.00000000e+00 8.02547730e-01 8.74182776e-01 6.55641045e-01
8.69918767e-01 4.12920686e-01 nan 4.34054109e-01
5.54604573e-01 3.14830157e-03 8.29634841e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 4.98619437e-01
0.00000000e+00 0.00000000e+00 7.20371619e-01 1.62799781e-02
3.73295478e-01 5.20323501e-01 0.00000000e+00 nan
3.48000087e-04 2.41829304e-01 1.50045164e-01 0.00000000e+00
8.67415087e-01 7.31957881e-01 9.29791719e-01 1.28032094e-01
2.77808135e-02 1.25956544e-01 0.00000000e+00] | [0.00000000e+00 9.10809038e-01 9.53614030e-01 6.91330346e-01
9.25106631e-01 4.73740259e-01 nan 5.64222160e-01
7.49045544e-01 3.42805593e-03 9.38335743e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 5.77484642e-01
0.00000000e+00 0.00000000e+00 8.68434883e-01 1.63507406e-02
5.76763406e-01 7.07811962e-01 0.00000000e+00 nan
3.51671539e-04 3.02660657e-01 1.55815731e-01 0.00000000e+00
9.39832349e-01 8.43146236e-01 9.70195728e-01 2.11579170e-01
1.06049228e-01 1.61502816e-01 0.00000000e+00] | 14 |
| 0.3000 | 0.4375 | 0.3296 | 0.4004 | 0.8666 | [0. 0.78266617 0.87516084 0.70472612 0.86490176 0.45228049
nan 0.42625351 0.54739354 0. 0.82459025 0.
0. 0. 0. 0.51809119 0. 0.
0.69081711 0.12347692 0.35720113 0.50921058 0. nan
0.00489936 0.24630062 0.14805039 0. 0.86169724 0.71926146
0.92796331 0.08257639 0.06410606 0.14539247 0. ] | [0. 0.9075929 0.9264549 0.93787289 0.92618179 0.57743083
nan 0.55003982 0.78286607 0. 0.94643176 0.
0. 0. 0. 0.62538921 0. 0.
0.80130182 0.13309691 0.69176706 0.69506169 0. nan
0.00507726 0.30979772 0.15393969 0. 0.93923901 0.84161243
0.9636732 0.1240378 0.17630371 0.19733096 0. ] | 15 |
| 0.2958 | 0.4558 | 0.3321 | 0.3960 | 0.8649 | [0.00000000e+00 7.61108709e-01 8.60621205e-01 6.66132134e-01
8.52805958e-01 4.61529893e-01 nan 4.08367412e-01
5.31449716e-01 0.00000000e+00 8.35699926e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 5.07030790e-01
0.00000000e+00 0.00000000e+00 7.07129610e-01 6.48353710e-02
3.15606022e-01 5.11721371e-01 0.00000000e+00 nan
6.30311903e-04 2.73874288e-01 2.03863944e-01 0.00000000e+00
8.66259515e-01 7.58237242e-01 9.29139752e-01 2.50199629e-01
3.09762934e-02 1.61355571e-01 0.00000000e+00] | [0.00000000e+00 9.26534567e-01 9.27389090e-01 7.04037518e-01
9.24733729e-01 5.57765301e-01 nan 5.03121563e-01
8.16946898e-01 0.00000000e+00 9.33051726e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 6.15985021e-01
0.00000000e+00 0.00000000e+00 8.13389275e-01 6.50734371e-02
6.70053085e-01 6.00421712e-01 0.00000000e+00 nan
6.81363606e-04 3.40217727e-01 2.25090328e-01 0.00000000e+00
9.48289295e-01 8.67999126e-01 9.71413074e-01 3.64800631e-01
8.85273258e-02 2.01834862e-01 0.00000000e+00] | 16 |
| 0.2734 | 0.4359 | 0.3324 | 0.3921 | 0.8700 | [0.00000000e+00 7.69753150e-01 8.66375446e-01 7.12734804e-01
8.60891290e-01 4.75851982e-01 nan 4.34175617e-01
5.46685274e-01 3.24503811e-02 8.38429371e-01 6.02288697e-04
0.00000000e+00 0.00000000e+00 0.00000000e+00 4.86131744e-01
0.00000000e+00 0.00000000e+00 7.27408322e-01 8.04817093e-02
3.54733831e-01 5.42941315e-01 0.00000000e+00 nan
2.71841412e-04 2.58488818e-01 1.79625596e-01 0.00000000e+00
8.66870562e-01 7.48169480e-01 9.27856685e-01 7.49529761e-02
5.06564228e-02 1.31117070e-01 0.00000000e+00] | [0.00000000e+00 9.25702577e-01 9.27332221e-01 7.66360378e-01
9.04612005e-01 5.93608265e-01 nan 6.05902185e-01
7.70458139e-01 3.87911592e-02 9.33542542e-01 1.19000397e-03
0.00000000e+00 0.00000000e+00 0.00000000e+00 5.85184718e-01
0.00000000e+00 0.00000000e+00 8.77167736e-01 9.19689932e-02
5.50758130e-01 6.18987337e-01 0.00000000e+00 nan
2.85733125e-04 3.36777474e-01 1.97679266e-01 0.00000000e+00
9.34506301e-01 8.56945493e-01 9.73108663e-01 8.09021630e-02
2.00250313e-01 1.67017963e-01 0.00000000e+00] | 17 |
| 0.2616 | 0.4510 | 0.3349 | 0.4009 | 0.8726 | [0. 0.79136177 0.88249795 0.78534299 0.86229779 0.47690438
nan 0.43122561 0.54187893 0.01320356 0.83487865 0.01648963
0. 0.00144389 0. 0.51996092 0. 0.
0.70356624 0.09487574 0.37358034 0.43221057 0. nan
0.05030614 0.28552575 0.19602978 0. 0.86214818 0.75003141
0.92402054 0.03367055 0.0295635 0.15961857 0. ] | [0. 0.9308775 0.94322916 0.85211162 0.90509242 0.57554177
nan 0.5785717 0.8193957 0.0157871 0.93896272 0.03232844
0. 0.00144389 0. 0.6718842 0. 0.
0.80146551 0.09782199 0.57800629 0.60986079 0. nan
0.05345407 0.40116852 0.21407727 0. 0.93948742 0.87904777
0.97272876 0.03561119 0.11522737 0.2649179 0. ] | 18 |
| 0.2570 | 0.4381 | 0.3378 | 0.4040 | 0.8691 | [0. 0.78633412 0.8781239 0.70951789 0.85768155 0.49725305
nan 0.4385802 0.5419402 0.01325455 0.84049064 0.03469167
0. 0. 0. 0.52032603 0. 0.
0.68820155 0.07929718 0.30712852 0.51640481 0. nan
0.01769049 0.26803817 0.21887178 0. 0.85998636 0.71539146
0.93235425 0.24885785 0.05621853 0.11969413 0. ] | [0. 0.91321796 0.93586512 0.7493935 0.91472526 0.63834931
nan 0.58292224 0.81417994 0.01497519 0.94252235 0.05394685
0. 0. 0. 0.64331398 0. 0.
0.82029437 0.08115742 0.56811405 0.59644195 0. nan
0.01995736 0.34179208 0.24586576 0. 0.94413845 0.83304234
0.96807676 0.34801978 0.20125156 0.15898892 0. ] | 19 |
| 0.2617 | 0.4168 | 0.3396 | 0.3963 | 0.8781 | [0.00000000e+00 7.94986290e-01 8.78321279e-01 7.49897343e-01
8.49326301e-01 5.23130579e-01 nan 4.50929207e-01
5.51662857e-01 2.18050542e-02 8.41160082e-01 1.61248710e-02
0.00000000e+00 0.00000000e+00 0.00000000e+00 4.99800580e-01
0.00000000e+00 0.00000000e+00 7.33030551e-01 3.70162822e-02
3.87012787e-01 5.37036435e-01 0.00000000e+00 nan
2.52828519e-04 2.58401363e-01 2.18729726e-01 0.00000000e+00
8.68371051e-01 7.68056025e-01 9.33727233e-01 9.82409932e-02
3.83513478e-02 1.51214616e-01 0.00000000e+00] | [0.00000000e+00 9.07468689e-01 9.33071883e-01 8.06640187e-01
9.49407168e-01 6.81786840e-01 nan 5.64532420e-01
8.05049940e-01 2.72440235e-02 9.42797113e-01 2.47917493e-02
0.00000000e+00 0.00000000e+00 0.00000000e+00 5.80009257e-01
0.00000000e+00 0.00000000e+00 8.80850860e-01 3.73148381e-02
5.59445628e-01 5.88173859e-01 0.00000000e+00 nan
2.63753654e-04 3.24654954e-01 2.34262090e-01 0.00000000e+00
9.43142842e-01 8.79414683e-01 9.68600549e-01 1.42839273e-01
9.82895286e-02 1.98829554e-01 0.00000000e+00] | 20 |
| 0.2444 | 0.4463 | 0.3361 | 0.3952 | 0.8747 | [0.00000000e+00 8.09356188e-01 8.73758666e-01 7.03484664e-01
8.50663613e-01 4.40395666e-01 nan 4.70255723e-01
5.66815751e-01 5.21693766e-04 8.47186281e-01 1.10941303e-02
0.00000000e+00 0.00000000e+00 0.00000000e+00 5.16683571e-01
0.00000000e+00 0.00000000e+00 7.12799896e-01 5.64981182e-02
3.71929696e-01 5.03181014e-01 0.00000000e+00 nan
1.16099071e-03 2.60644571e-01 2.24766447e-01 0.00000000e+00
8.67722068e-01 7.66105359e-01 9.35288074e-01 1.30229066e-01
4.53205481e-02 1.26864531e-01 0.00000000e+00] | [0.00000000e+00 9.19159433e-01 9.48361579e-01 7.32578878e-01
9.32163864e-01 5.01168899e-01 nan 5.99960700e-01
7.73417917e-01 5.41271989e-04 9.21135986e-01 1.94367315e-02
0.00000000e+00 0.00000000e+00 0.00000000e+00 6.32331903e-01
0.00000000e+00 0.00000000e+00 8.67884025e-01 5.84201607e-02
5.18153787e-01 7.57499634e-01 0.00000000e+00 nan
1.25282986e-03 3.28637485e-01 2.39056420e-01 0.00000000e+00
9.39480431e-01 8.67690868e-01 9.70610826e-01 2.31997152e-01
1.16729245e-01 1.65141676e-01 0.00000000e+00] | 21 |
| 0.2327 | 0.4708 | 0.3333 | 0.3991 | 0.8674 | [0.00000000e+00 8.01482811e-01 8.67112634e-01 7.26469941e-01
8.47789494e-01 4.26344060e-01 nan 4.59877772e-01
5.61767489e-01 3.08784808e-02 8.50980045e-01 8.21140639e-04
0.00000000e+00 5.56009812e-02 0.00000000e+00 5.19460186e-01
0.00000000e+00 0.00000000e+00 6.66718429e-01 1.00749376e-01
3.06011822e-01 4.73609191e-01 0.00000000e+00 nan
5.21670878e-02 2.74661980e-01 2.16300138e-01 0.00000000e+00
8.70513680e-01 7.58257933e-01 9.31855744e-01 4.02310154e-04
5.41590367e-02 1.45066810e-01 0.00000000e+00] | [0.00000000e+00 8.96213795e-01 9.66916102e-01 7.73539637e-01
8.80813661e-01 4.61368313e-01 nan 6.05202376e-01
7.84196522e-01 3.49120433e-02 9.33452042e-01 1.09083697e-03
0.00000000e+00 5.61056106e-02 0.00000000e+00 6.42177901e-01
0.00000000e+00 0.00000000e+00 7.70254403e-01 1.04428195e-01
7.01999428e-01 6.97985197e-01 0.00000000e+00 nan
5.98940589e-02 3.48124479e-01 2.34748471e-01 0.00000000e+00
9.37775670e-01 8.78224080e-01 9.72937862e-01 4.32992071e-04
2.44305382e-01 1.83551218e-01 0.00000000e+00] | 22 |
| 0.2307 | 0.4395 | 0.3472 | 0.4131 | 0.8740 | [0. 0.78921013 0.87836164 0.7238651 0.85405051 0.48305222
nan 0.46174517 0.55335413 0.01711339 0.84971971 0.07427615
0. 0.19647651 0. 0.52684296 0. 0.
0.70735442 0.11184526 0.39826268 0.50387815 0. nan
0.05343915 0.27936942 0.23151827 0. 0.87119512 0.76032244
0.93287485 0.00244547 0.02821955 0.16774339 0. ] | [0. 0.92820839 0.93256894 0.76932845 0.93401204 0.61981657
nan 0.61634072 0.81936678 0.01975643 0.94590099 0.1012495
0. 0.2415429 0. 0.6241269 0. 0.
0.81714715 0.1280285 0.64358053 0.67924551 0. nan
0.06437787 0.34337226 0.26928155 0. 0.94337279 0.8780219
0.97322357 0.00300208 0.10171047 0.23578672 0. ] | 23 |
| 0.2314 | 0.4319 | 0.3499 | 0.4182 | 0.8767 | [0. 0.81268914 0.88090286 0.77985058 0.85570746 0.45828635
nan 0.47299524 0.53277529 0.04378213 0.84593985 0.03478787
0. 0.1971561 0. 0.54043589 0. 0.
0.7359562 0.05923714 0.36875898 0.54834606 0. nan
0.0091682 0.2642792 0.23962131 0. 0.86379601 0.6984222
0.9344155 0.19722481 0.0095982 0.16301427 0. ] | [0. 0.89305484 0.93568687 0.88020793 0.94148981 0.58722529
nan 0.61994664 0.81367861 0.04916554 0.95151603 0.06277271
0. 0.25453795 0. 0.66359505 0. 0.
0.86745813 0.05975395 0.56066626 0.69054391 0. nan
0.01118755 0.32281238 0.27171345 0. 0.9460712 0.8195451
0.97224534 0.37103572 0.02778473 0.2253981 0. ] | 24 |
| 0.2280 | 0.4165 | 0.3413 | 0.4040 | 0.8751 | [0. 0.80627187 0.88428648 0.68558419 0.86189479 0.47772167
nan 0.46152926 0.58013249 0.03674413 0.85526933 0.00621328
0. 0.00391914 0. 0.51552043 0. 0.
0.7219038 0.11633219 0.37078391 0.48759114 0. nan
0.04097689 0.25837403 0.23783935 0. 0.86459819 0.69037029
0.93378787 0.16489662 0.01161391 0.18831714 0. ] | [0. 0.90036072 0.93838536 0.72247672 0.95316477 0.6161249
nan 0.61164372 0.77422776 0.05448805 0.94020362 0.00912336
0. 0.00391914 0. 0.64499705 0. 0.
0.8629953 0.12306992 0.55344734 0.64075339 0. nan
0.04945381 0.31623508 0.26601584 0. 0.93210162 0.89534679
0.97086444 0.24202332 0.02736754 0.28242179 0. ] | 25 |
| 0.2332 | 0.4404 | 0.3516 | 0.4245 | 0.8748 | [0. 0.79554848 0.87301633 0.83834717 0.85459995 0.51844513
nan 0.47147093 0.51906075 0.07904743 0.85844229 0.11856842
0. 0.20806579 0. 0.49652457 0. 0.
0.71537872 0.12687546 0.36603458 0.48369829 0. nan
0.01636133 0.26809207 0.24211655 0. 0.86537401 0.74525303
0.93088227 0.00370986 0.03710524 0.17143043 0. ] | [0. 0.90971751 0.92232534 0.87320268 0.9481476 0.68155153
nan 0.66184212 0.82753268 0.10690122 0.9341899 0.13962713
0. 0.38098185 0. 0.59963814 0. 0.
0.82473535 0.13202988 0.64751899 0.62376621 0. nan
0.01872651 0.32878909 0.28248332 0. 0.9485047 0.86660759
0.97579904 0.00397391 0.12273675 0.24574891 0. ] | 26 |
| 0.2146 | 0.4515 | 0.3472 | 0.4129 | 0.8760 | [0. 0.77768215 0.87939478 0.75655563 0.85872515 0.51912277
nan 0.47242163 0.57270343 0.04912978 0.84772409 0.04133148
0. 0.28216704 0. 0.50142357 0. 0.
0.72646668 0.08708308 0.41888468 0.46264328 0. nan
0.00952928 0.24906863 0.23188316 0. 0.86985544 0.75426408
0.93570207 0. 0.00161353 0.15174018 0. ] | [0. 0.91082524 0.9223558 0.86526048 0.93459861 0.70221741
nan 0.61752143 0.76844318 0.064682 0.94790176 0.04432765
0. 0.38675743 0. 0.58171337 0. 0.
0.86150488 0.08919156 0.64534791 0.76292334 0. nan
0.01094578 0.29978017 0.26035297 0. 0.9390376 0.84793111
0.96900737 0. 0.00383813 0.18862775 0. ] | 27 |
| 0.2245 | 0.4819 | 0.3481 | 0.4183 | 0.8677 | [0. 0.74754716 0.87589221 0.7595096 0.75176585 0.46424109
nan 0.43492805 0.55661905 0.04973311 0.85506372 0.1407866
0. 0.15455217 0. 0.4863142 0. 0.
0.72532016 0.16110796 0.37871237 0.54738549 0. nan
0.03596414 0.27015132 0.27383189 0. 0.87155837 0.74696253
0.93097913 0.03859201 0.03974808 0.19051036 0. ] | [0. 0.92496748 0.93198417 0.91066332 0.76595448 0.59939476
nan 0.57168392 0.76530022 0.06892197 0.94636898 0.19843316
0. 0.30218647 0. 0.5681646 0. 0.
0.83417317 0.18737447 0.68308592 0.66512645 0. nan
0.03932127 0.32249168 0.30377988 0. 0.94566641 0.86854326
0.97023299 0.04389577 0.11848144 0.26846741 0. ] | 28 |
| 0.2265 | 0.4246 | 0.3481 | 0.4050 | 0.8802 | [0. 0.81690969 0.8780207 0.77104697 0.84868605 0.47437381
nan 0.4670048 0.56430385 0.0503272 0.85498949 0.10595414
0. 0.11204925 0. 0.53524176 0. 0.
0.74013112 0.04461066 0.39307836 0.51529041 0. nan
0.01267638 0.28575942 0.24784411 0. 0.87024598 0.76078812
0.93874897 0.00447477 0.03791316 0.15759319 0. ] | [0. 0.90897891 0.95223635 0.79356326 0.91900275 0.59798835
nan 0.59817326 0.81432455 0.05827695 0.94238457 0.11364538
0. 0.13139439 0. 0.65023563 0. 0.
0.88394096 0.04698092 0.61423758 0.59820238 0. nan
0.01437457 0.35860267 0.28456782 0. 0.93586082 0.86061465
0.96934172 0.00557116 0.10738423 0.20431221 0. ] | 29 |
| 0.2067 | 0.4302 | 0.3582 | 0.4204 | 0.8782 | [0. 0.80612851 0.87374492 0.77399105 0.80074838 0.5069675
nan 0.46617634 0.55019827 0.13081984 0.85635853 0.13375531
0. 0.27468398 0. 0.54140892 0. 0.
0.7462191 0.05761157 0.39168026 0.5446341 0. nan
0.01506284 0.29841736 0.22899218 0. 0.87061226 0.74041425
0.93770948 0.13239158 0.00821369 0.13209342 0. ] | [0. 0.90557948 0.95169451 0.78927679 0.84910476 0.62275304
nan 0.60393893 0.82533454 0.18655841 0.94690627 0.1592622
0. 0.33168317 0. 0.68313978 0. 0.
0.88132122 0.06502636 0.62727359 0.66106362 0. nan
0.01804515 0.38855037 0.25931073 0. 0.94157236 0.8393111
0.97364591 0.18567662 0.01985816 0.15624759 0. ] | 30 |
| 0.1993 | 0.4191 | 0.3525 | 0.4026 | 0.8855 | [0.00000000e+00 8.12143438e-01 8.82519501e-01 8.32421151e-01
8.74313051e-01 4.81253823e-01 nan 4.87073361e-01
5.86132068e-01 9.62771937e-02 8.59982957e-01 8.85474149e-02
0.00000000e+00 2.01491034e-04 0.00000000e+00 5.35389616e-01
0.00000000e+00 0.00000000e+00 7.53505814e-01 3.69389833e-02
3.98315791e-01 5.86352445e-01 0.00000000e+00 nan
3.86967641e-02 2.99523304e-01 2.23544639e-01 0.00000000e+00
8.66545952e-01 7.59345221e-01 9.36605085e-01 5.82816319e-04
7.04036476e-04 1.95231882e-01 0.00000000e+00] | [0.00000000e+00 9.18027876e-01 9.52593301e-01 8.60861913e-01
9.17933036e-01 5.59645609e-01 nan 6.69381444e-01
7.81217462e-01 1.31348669e-01 9.42924301e-01 1.03431178e-01
0.00000000e+00 2.06270627e-04 0.00000000e+00 6.30669865e-01
0.00000000e+00 0.00000000e+00 8.97879175e-01 3.70010043e-02
4.89103277e-01 6.59469339e-01 0.00000000e+00 nan
4.58931358e-02 3.79932245e-01 2.46004725e-01 0.00000000e+00
9.44760882e-01 8.53162772e-01 9.75178979e-01 5.96566854e-04
1.75219024e-03 2.87445529e-01 0.00000000e+00] | 31 |
| 0.2068 | 0.4805 | 0.3370 | 0.3952 | 0.8643 | [0. 0.77056757 0.8601312 0.79546358 0.80826542 0.46090981
nan 0.47734482 0.58905088 0.03181978 0.85901467 0.01694625
0. 0.00549451 0. 0.48326241 0. 0.
0.71413255 0.08548594 0.355285 0.56037404 0. nan
0.11377479 0.28155688 0.23155416 0. 0.84077004 0.62872483
0.94074387 0.04323906 0.00477968 0.16213294 0. ] | [0. 0.86625295 0.93033124 0.83741848 0.95175277 0.58905634
nan 0.60932022 0.77904824 0.04077582 0.93578138 0.01963507
0. 0.00556931 0. 0.57342422 0. 0.
0.8469629 0.09113733 0.63002638 0.69225687 0. nan
0.12851397 0.34756471 0.25621873 0. 0.94994706 0.68278681
0.96967504 0.05913709 0.01677096 0.23138435 0. ] | 32 |
| 0.2026 | 0.4072 | 0.3583 | 0.4178 | 0.8851 | [0. 0.82427491 0.88959049 0.80744875 0.87290087 0.48819667
nan 0.49479206 0.5689031 0.01115143 0.86655557 0.12424298
0. 0.08815858 0. 0.54211487 0. 0.
0.73966336 0.07793422 0.39875788 0.55704844 0. nan
0.12136758 0.29085268 0.26794571 0. 0.86907912 0.7468531
0.94141202 0.04189236 0.01138124 0.18175892 0. ] | [0. 0.90820792 0.94843014 0.86590797 0.935603 0.6234152
nan 0.65185001 0.7529887 0.01136671 0.9412309 0.13427211
0. 0.10457921 0. 0.6997812 0. 0.
0.87681775 0.08373086 0.56962713 0.71690686 0. nan
0.15183419 0.36895259 0.31343802 0. 0.93965024 0.8725032
0.97198241 0.05008275 0.04046725 0.25251491 0. ] | 33 |
| 0.1890 | 0.4580 | 0.3568 | 0.4229 | 0.8760 | [0. 0.81909166 0.87945582 0.84022719 0.85181051 0.44449375
nan 0.46587584 0.56531144 0.05849796 0.85834655 0.13038109
0. 0.23667513 0. 0.53010343 0. 0.
0.70286385 0.08324543 0.31813212 0.48792893 0. nan
0.08722779 0.30648587 0.25211314 0. 0.870058 0.73406627
0.93603361 0.09757433 0.04511995 0.17383064 0. ] | [0. 0.89810904 0.95241393 0.90661944 0.91497772 0.51961956
nan 0.65114676 0.83394393 0.06865133 0.95007864 0.14656882
0. 0.30775578 0. 0.61461752 0. 0.
0.81590436 0.09039982 0.61695222 0.59357779 0. nan
0.09587445 0.39630552 0.2932532 0. 0.94689981 0.84862376
0.97537462 0.14406127 0.14217772 0.23057617 0. ] | 34 |
| 0.1856 | 0.4192 | 0.3656 | 0.4327 | 0.8810 | [0. 0.80138932 0.88262901 0.81089302 0.86535724 0.47775953
nan 0.47998869 0.57502219 0.0555217 0.8547197 0.09356223
0. 0.29941446 0. 0.4994726 0. 0.
0.74427779 0.10476547 0.39876117 0.530613 0. nan
0.06260014 0.28825048 0.26307467 0. 0.86958647 0.76353802
0.93445836 0.20078295 0.02415195 0.18310195 0. ] | [0. 0.92397697 0.93724033 0.85106091 0.92750395 0.5475243
nan 0.64661523 0.82444757 0.07397384 0.94253458 0.12971043
0. 0.39026403 0. 0.56784903 0. 0.
0.88627818 0.11856641 0.54413363 0.67935232 0. nan
0.0841594 0.35805457 0.30635075 0. 0.93943008 0.852706
0.97181886 0.43921754 0.0738423 0.26218876 0. ] | 35 |
| 0.1823 | 0.4526 | 0.3522 | 0.4102 | 0.8767 | [0. 0.78316685 0.87642881 0.75047304 0.86249292 0.46791957
nan 0.49549382 0.57114384 0.08703693 0.86072555 0.10211813
0. 0.18376371 0. 0.49874928 0. 0.
0.73435033 0.05303611 0.39974749 0.45439447 0. nan
0.03187949 0.28929847 0.252677 0. 0.8723413 0.74111546
0.93814337 0.0911524 0.01717172 0.20816307 0. ] | [0. 0.92816869 0.93137636 0.78767061 0.92675339 0.59854763
nan 0.62024483 0.71829085 0.14009923 0.95105293 0.11761206
0. 0.21431518 0. 0.56629218 0. 0.
0.87048153 0.05749435 0.53241044 0.64072965 0. nan
0.03600237 0.35849772 0.27463174 0. 0.93890724 0.88137406
0.97292233 0.14287776 0.04397163 0.28736837 0. ] | 36 |
| 0.1828 | 0.4314 | 0.3540 | 0.4137 | 0.8837 | [0.00000000e+00 8.08817183e-01 8.86533437e-01 8.29367464e-01
8.66921982e-01 5.02412424e-01 nan 4.88635866e-01
5.60640323e-01 8.39031061e-02 8.56029524e-01 1.48001648e-01
0.00000000e+00 2.88729590e-02 0.00000000e+00 5.27888135e-01
0.00000000e+00 0.00000000e+00 7.40145106e-01 5.94355934e-02
3.83677842e-01 5.51371204e-01 0.00000000e+00 nan
2.54484244e-02 2.99810052e-01 2.57164681e-01 0.00000000e+00
8.66461858e-01 7.59758000e-01 9.39794819e-01 8.55545803e-03
2.93707321e-04 2.00986041e-01 0.00000000e+00] | [0.00000000e+00 9.20986697e-01 9.42897048e-01 8.79794678e-01
9.16674152e-01 6.25797872e-01 nan 6.53392696e-01
7.86385022e-01 1.54984213e-01 9.51069237e-01 1.78103927e-01
0.00000000e+00 2.99092409e-02 0.00000000e+00 6.40158209e-01
0.00000000e+00 0.00000000e+00 8.70579667e-01 6.04130053e-02
5.57868972e-01 7.35243038e-01 0.00000000e+00 nan
3.09031365e-02 3.74579444e-01 2.92419400e-01 0.00000000e+00
9.45130703e-01 8.41614926e-01 9.69892426e-01 8.91001463e-03
6.67501043e-04 2.82612669e-01 0.00000000e+00] | 37 |
| 0.1824 | 0.4277 | 0.3516 | 0.4128 | 0.8808 | [0. 0.80850849 0.8835188 0.81832156 0.87084804 0.52909381
nan 0.48544633 0.57416469 0.06544565 0.86014741 0.09572506
0. 0.04364361 0. 0.53546177 0. 0.
0.72880369 0.07815572 0.36619794 0.45105441 0. nan
0.02904442 0.31295304 0.268757 0. 0.87009835 0.77016379
0.93780115 0.02053909 0.00775031 0.19176263 0. ] | [0. 0.93462002 0.91830503 0.88981486 0.94415116 0.74919228
nan 0.63542345 0.80433651 0.09896256 0.95110348 0.11324871
0. 0.04971122 0. 0.61501725 0. 0.
0.85270915 0.08732425 0.54361868 0.58993825 0. nan
0.03250764 0.39146001 0.30016676 0. 0.9446708 0.87850239
0.97614375 0.02966477 0.01535252 0.27686197 0. ] | 38 |
| 0.1853 | 0.4315 | 0.3703 | 0.4396 | 0.8843 | [0. 0.82385333 0.88384746 0.82923402 0.87047461 0.44946715
nan 0.50195066 0.5526193 0.13775167 0.85419626 0.11663244
0. 0.24123441 0. 0.50296284 0. 0.
0.75525625 0.01028213 0.42676119 0.62702595 0.06111111 nan
0.01903464 0.32208879 0.27514231 0. 0.86733642 0.75225439
0.93747993 0.18324185 0.01936191 0.19849636 0. ] | [0. 0.90365119 0.95252047 0.86895636 0.95780426 0.51196152
nan 0.66021153 0.84474181 0.20306721 0.91545736 0.16898056
0. 0.37891914 0. 0.57321383 0. 0.
0.86912221 0.01029375 0.54801488 0.73707468 0.06340058 nan
0.02149592 0.43258561 0.31736381 0. 0.94644158 0.84006614
0.97630627 0.4108325 0.04714226 0.34773038 0. ] | 39 |
| 0.1750 | 0.4506 | 0.3588 | 0.4247 | 0.8777 | [0. 0.8085732 0.87850952 0.80281118 0.80588458 0.43532471
nan 0.49235495 0.56953178 0.04144004 0.86063777 0.1253508
0. 0.29125623 0. 0.56867524 0. 0.
0.72796854 0.03398534 0.34825502 0.58349517 0.04278075 nan
0.02356575 0.3145996 0.25091045 0. 0.86821676 0.7616297
0.94167999 0.0760155 0.01097801 0.17674597 0. ] | [0. 0.8587409 0.96059527 0.84914689 0.95691624 0.49065318
nan 0.64206647 0.80599476 0.04330176 0.94759276 0.13288378
0. 0.49401815 0. 0.68717916 0. 0.
0.83929688 0.03427065 0.62310627 0.69531488 0.04610951 nan
0.02716663 0.42729112 0.28005142 0. 0.94607512 0.87723316
0.97355068 0.11214495 0.02653317 0.23750462 0. ] | 40 |
| 0.1856 | 0.4630 | 0.3468 | 0.4169 | 0.8702 | [0. 0.79680775 0.87408906 0.80508886 0.7836208 0.54111501
nan 0.46906575 0.51571781 0.05906675 0.84646062 0.12752339
0. 0.02326951 0. 0.49396166 0. 0.
0.70301274 0.10455404 0.36326525 0.45432265 0.00288184 nan
0.06241517 0.29796314 0.27257073 0. 0.85824781 0.70264792
0.93738763 0.10944047 0.07468448 0.16378606 0. ] | [0. 0.92066362 0.93733642 0.8457505 0.80518061 0.72000372
nan 0.61879522 0.83036713 0.090212 0.93724404 0.17978977
0. 0.03259076 0. 0.52749727 0. 0.
0.80827055 0.12335237 0.6834038 0.55220964 0.00288184 nan
0.08086247 0.38878944 0.30794886 0. 0.93395437 0.8773043
0.97470902 0.14777538 0.20984564 0.21967583 0. ] | 41 |
| 0.1886 | 0.4251 | 0.3676 | 0.4456 | 0.8827 | [0. 0.81869055 0.88872516 0.83047488 0.84276443 0.50458334
nan 0.50154127 0.55514409 0.11767963 0.8593002 0.18142472
0. 0.17150681 0. 0.52765579 0. 0.
0.72523268 0.11768383 0.38784377 0.55537055 0.03125 nan
0.04156492 0.2927821 0.28836848 0. 0.87210633 0.77489312
0.94023927 0.01637101 0.07383087 0.2126436 0. ] | [0. 0.91039924 0.94692984 0.8782822 0.93049806 0.63547194
nan 0.64587233 0.81659018 0.1533604 0.95128203 0.25357001
0. 0.5554868 0. 0.59909114 0. 0.
0.81739862 0.14772157 0.70659589 0.64127558 0.03170029 nan
0.05387168 0.38205471 0.34779739 0. 0.9516963 0.85983713
0.97204659 0.02417058 0.18481435 0.30708968 0. ] | 42 |
| 0.1649 | 0.4242 | 0.3717 | 0.4383 | 0.8829 | [0. 0.8245252 0.88278 0.781006 0.85842353 0.51259623
nan 0.51245584 0.58046843 0.1180867 0.86495296 0.18057803
0. 0.19290237 0. 0.57019005 0. 0.
0.73953555 0.08630286 0.40811401 0.47419294 0.14364641 nan
0.03056729 0.30467132 0.2893461 0. 0.86773127 0.73354145
0.94299225 0.12116164 0.05836818 0.18823529 0. ] | [0. 0.89896041 0.94310935 0.79709015 0.94969977 0.67147032
nan 0.65138807 0.79877367 0.14253496 0.94740116 0.23234827
0. 0.27021452 0. 0.66277455 0. 0.
0.86192885 0.09854381 0.60905623 0.64588831 0.14985591 nan
0.03674968 0.38943084 0.3317468 0. 0.93877576 0.88623506
0.97329085 0.17635286 0.14092616 0.25900882 0. ] | 43 |
| 0.1721 | 0.4380 | 0.3659 | 0.4303 | 0.8830 | [0. 0.79204009 0.88761045 0.81838271 0.87599756 0.55268285
nan 0.51243018 0.57342413 0.1328 0.86370891 0.12697056
0. 0.01482085 0. 0.54979459 0. 0.
0.7440432 0.07280754 0.43229119 0.48547786 0.1754386 nan
0.04705945 0.29268443 0.28866261 0. 0.86370303 0.73106053
0.94208474 0.03836815 0.07654387 0.18451303 0. ] | [0. 0.91791605 0.9276758 0.9021757 0.9262825 0.70668615
nan 0.6749058 0.79231422 0.1871899 0.94815532 0.13258628
0. 0.01877063 0. 0.65320205 0. 0.
0.86297038 0.08255398 0.62974983 0.64313491 0.20172911 nan
0.05861925 0.36101085 0.32994024 0. 0.9582507 0.8229035
0.97011706 0.05883881 0.16712557 0.26603474 0. ] | 44 |
| 0.1781 | 0.4529 | 0.3574 | 0.4191 | 0.8774 | [0. 0.80590718 0.86988106 0.71453937 0.86614986 0.4249264
nan 0.47975456 0.54781776 0.09453882 0.84344987 0.10822898
0. 0.05507559 0. 0.52010855 0. 0.
0.74685485 0.08247778 0.43946981 0.52154191 0.24444444 nan
0.03570312 0.29856758 0.28932541 0. 0.86943451 0.76210843
0.93975656 0.04628113 0.00666137 0.17955183 0. ] | [0. 0.90777016 0.93538454 0.74532453 0.96011721 0.53476928
nan 0.67329934 0.81065134 0.10744249 0.95209245 0.16798889
0. 0.06311881 0. 0.58061937 0. 0.
0.86390726 0.09013307 0.63941956 0.64314678 0.28530259 nan
0.04266215 0.38685357 0.33004447 0. 0.94754825 0.87771988
0.97640668 0.04758102 0.0090947 0.25304287 0. ] | 45 |
| 0.1684 | 0.4189 | 0.3690 | 0.4293 | 0.8874 | [0. 0.84396057 0.88758961 0.81114547 0.88917721 0.45907522
nan 0.49766899 0.57141046 0.13114032 0.86950251 0.16812566
0. 0.05402798 0. 0.55671436 0. 0.
0.75187936 0.06411064 0.38664923 0.50159138 0.2393736 nan
0.03612036 0.31872514 0.30305254 0. 0.86860485 0.76230745
0.94340831 0.00953726 0.07691803 0.17518119 0. ] | [0. 0.91434158 0.94889397 0.87210345 0.95322397 0.59465787
nan 0.66608234 0.78205623 0.16111863 0.93973807 0.20378818
0. 0.06930693 0. 0.66147017 0. 0.
0.89656164 0.06706628 0.53578308 0.5492624 0.30835735 nan
0.04147526 0.40549508 0.35870623 0. 0.94735747 0.87335684
0.97257866 0.01137326 0.19582812 0.23648119 0. ] | 46 |
| 0.1720 | 0.4344 | 0.3638 | 0.4240 | 0.8838 | [0. 0.83634649 0.88252909 0.81745832 0.86977854 0.47517537
nan 0.5031844 0.59798619 0.1079865 0.85554815 0.12994482
0. 0.04696466 0. 0.53919303 0. 0.
0.75014771 0.0744804 0.3882156 0.59230036 0.10471204 nan
0.04368721 0.31787732 0.29543235 0. 0.86536823 0.72760094
0.94104705 0.03982311 0.02227327 0.18113074 0. ] | [0. 0.8989315 0.94316465 0.90119359 0.9497093 0.61772174
nan 0.62712399 0.73975165 0.12124493 0.94667146 0.16580722
0. 0.06415017 0. 0.68711605 0. 0.
0.87653714 0.07776801 0.5226676 0.71570028 0.11527378 nan
0.05734444 0.42000828 0.35728182 0. 0.94526778 0.87853858
0.96476425 0.06568009 0.06391322 0.27004723 0. ] | 47 |
| 0.1621 | 0.4094 | 0.3770 | 0.4361 | 0.8898 | [0. 0.82399384 0.89066451 0.819736 0.87845688 0.52339167
nan 0.51547818 0.58811266 0.11202084 0.86668833 0.16280099
0. 0.08182504 0. 0.55484888 0. 0.
0.76325378 0.05293637 0.40960729 0.5817725 0.34567901 nan
0.03438815 0.31336815 0.27887594 0. 0.87318663 0.74216819
0.94257309 0.02714967 0.0518505 0.20559452 0. ] | [0. 0.92314847 0.9443915 0.90050762 0.92923985 0.63544129
nan 0.66654946 0.76154024 0.13576906 0.95223513 0.18236811
0. 0.13428218 0. 0.62953379 0. 0.
0.89400131 0.06196648 0.56801551 0.65125268 0.40345821 nan
0.04132141 0.3929294 0.31270845 0. 0.94826999 0.87554959
0.97042967 0.03051151 0.14226116 0.3023746 0. ] | 48 |
| 0.1508 | 0.4301 | 0.3689 | 0.4261 | 0.8878 | [0. 0.82155443 0.88837272 0.80869927 0.84681809 0.50445633
nan 0.5062558 0.58202362 0.09694114 0.86506226 0.10300594
0. 0.03122511 0. 0.55651564 0. 0.
0.76493797 0.04021662 0.40453306 0.56038987 0.34382567 nan
0.02428609 0.30885576 0.28811326 0. 0.87087236 0.74857511
0.94321046 0.02300712 0.03721037 0.20366003 0. ] | [0. 0.88109026 0.95044945 0.85142397 0.95993416 0.6370042
nan 0.65971511 0.81045852 0.11321606 0.95401169 0.10670369
0. 0.04042904 0. 0.66801313 0. 0.
0.90595882 0.04265001 0.5292762 0.61230561 0.4092219 nan
0.0283755 0.37721503 0.3266398 0. 0.950358 0.87250445
0.96996696 0.02583519 0.09486859 0.28234463 0. ] | 49 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
|
ChrisP/xlm-roberta-base-finetuned-marc-en | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-21T19:56:44Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- wit-400m
- imagenet-12k
---
# Model card for vit_large_patch14_clip_336.openai_ft_in12k_in1k
A Vision Transformer (ViT) image classification model. Pretrained on WIT-400M image-text pairs by OpenAI using CLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 304.5
- GMACs: 174.7
- Activations (M): 128.2
- Image size: 336 x 336
- **Papers:**
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- WIT-400M
- ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_large_patch14_clip_336.openai_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_large_patch14_clip_336.openai_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
ChrisVCB/DialoGPT-medium-ej | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2022-11-21T20:00:09Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### stevediffusion_v2 Dreambooth model trained by daniel-comet with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
ChristopherA08/IndoELECTRA | [
"pytorch",
"electra",
"pretraining",
"id",
"dataset:oscar",
"transformers"
]
| null | {
"architectures": [
"ElectraForPreTraining"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-11-21T20:06:38Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-transcriptSteve
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-transcriptSteve
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6308
## Model description
More information needed
## Intended uses & limitations
More information needed
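As a rough, hedged illustration of intended use (the repository path below is assumed from the model name above and may differ from the actual one):

```python
from transformers import pipeline

# Assumed model id; replace with the actual repository path of this checkpoint.
generator = pipeline("text-generation", model="gpt2-finetuned-transcriptSteve")

print(generator("Today we are going to talk about", max_length=50, num_return_sequences=1))
```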
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 18 | 2.6415 |
| No log | 2.0 | 36 | 2.6353 |
| No log | 3.0 | 54 | 2.6308 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Chun/DialoGPT-large-dailydialog | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2022-11-21T20:16:51Z | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/Avrik/abstract-anim-spritesheets/resolve/main/AnimationGrid.gif"
tags:
- stable-diffusion
- text-to-image
- image-to-image
---
# Abstract Animation Sprite Sheets
An experimental Dreambooth model trained on individual frames of looping 3D animations that were then laid out on a 4x4 grid. Generates sprite sheets that can create very interesting abstract animations.
Use the token **AbstrAnm spritesheet**. Size must be set at 512x512 or your outputs may not work properly.
**Example prompt:** <i>AbstrAnm spritesheet, animation of a red glowing orb in the sky, highly detailed, fog, atmosphere, glow, sprites, animated, abstract</i>
<br>
**Negative prompt:** <i>high contrast, text, overlay</i>
<br>
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 8
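Since each output is a 4x4 grid of frames, a non-upscaled 512x512 sheet can be sliced into sixteen 128x128 tiles and played back as a looping GIF. A minimal Pillow sketch (the file names are placeholders):

```python
from PIL import Image

sheet = Image.open("spritesheet.png")  # a 512x512 output from this model
tile = sheet.width // 4                # 128px tiles for a 4x4 grid

# Read the grid row by row, left to right.
frames = [
    sheet.crop((col * tile, row * tile, (col + 1) * tile, (row + 1) * tile))
    for row in range(4)
    for col in range(4)
]

# Save a looping animation at roughly 12 fps.
frames[0].save(
    "animation.gif",
    save_all=True,
    append_images=frames[1:],
    duration=80,
    loop=0,
)
```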
Feel free to experiment with other types of prompts and/or model merges.

You can also upscale it 4x to produce 512x512 animations. Used SD Upscale from AUTOMATIC1111's web UI to add more sharpness and detail.

Discovered it's actually quite flexible and could even animate less abstract concepts.

**Prompt 1:** <i>AbstrAnm spritesheet, animation of magical swirling clouds in the clear blue sky, floating in crystal clear water, circular, sunny, timelapse, lens flare, nature, 35mm lens shot, photorealistic, sprites, animated, art by Greg Rutkowski</i>
<br>
**Negative prompt:** <i>text, overlay, abstract, boring, empty, barren, simple background</i>
<br>
Steps: 25, Sampler: DPM++ 2S a, CFG scale: 10
**Prompt 2:** <i>AbstrAnm spritesheet, animation of a beautiful flower blowing in the wind, serene, pink, sunny, timelapse, lens flare, nature, 35mm lens shot, photorealistic, sprites, animated, art by Greg Rutkowski</i>
**Negative prompt:** <i>text, overlay, abstract, boring, empty, barren, simple background</i>
<br>
Steps: 25, Sampler: DPM++ 2S a, CFG scale: 10
Some issues with this model:
- May not loop seamlessly
- Tends to be too noisy
- Sprites aren't usually perfect squares
- Small size and short animation (could experiment with training on larger resolutions in the future) |
Chun/DialoGPT-medium-dailydialog | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2022-11-21T20:17:36Z | ---
language: en
thumbnail: http://www.huggingtweets.com/adamscochran-fehrsam-taschalabs/1669062033978/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504547300416364550/rFebXP9K_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1513406762904612866/-haRj3pk_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1593745112844144641/Q2zhPcdt_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tascha & Fred Ehrsam & Adam Cochran (adamscochran.eth)</div>
<div style="text-align: center; font-size: 14px;">@adamscochran-fehrsam-taschalabs</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tascha & Fred Ehrsam & Adam Cochran (adamscochran.eth).
| Data | Tascha | Fred Ehrsam | Adam Cochran (adamscochran.eth) |
| --- | --- | --- | --- |
| Tweets downloaded | 3244 | 1674 | 3242 |
| Retweets | 215 | 188 | 555 |
| Short tweets | 210 | 150 | 150 |
| Tweets kept | 2819 | 1336 | 2537 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/35tvoqtp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @adamscochran-fehrsam-taschalabs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fv0c31k5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fv0c31k5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/adamscochran-fehrsam-taschalabs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Chun/w-en2zh-hsk | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-11-21T20:21:16Z | This model is Nightmare XXX. Prompt being "NghtmrXxxFrk". To note this isn't actual porn or anything. However this takes my popular pictures that combine horrific, gross, nightmarish stuff with weird things like some minor nudity or even adult toy realm. In other words its semi-adult stuff that makes you say "WTF is this. Someone bleach my eyes!".
 |
Chun/w-zh2en-hsk | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-misinfo-model-700-Zhaohui
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-misinfo-model-700-Zhaohui
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5297
- Accuracy: 0.8857
- F1: 0.8889
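As a hedged sketch of how the classifier can be queried (the model id below is assumed from the name above; label names depend on how the dataset was encoded):

```python
from transformers import pipeline

# Assumed repository path; replace with the actual location of this checkpoint.
classifier = pipeline("text-classification", model="finetuning-misinfo-model-700-Zhaohui")

print(classifier("Drinking bleach cures the flu."))
# -> [{'label': ..., 'score': ...}]
```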
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Chun/w-zh2en-mtm | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/adamscochran-fehrsam-taschalabs/1669062033978/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504547300416364550/rFebXP9K_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1513406762904612866/-haRj3pk_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1593745112844144641/Q2zhPcdt_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tascha & Fred Ehrsam & Adam Cochran (adamscochran.eth)</div>
<div style="text-align: center; font-size: 14px;">@adamscochran-fehrsam-taschalabs</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tascha & Fred Ehrsam & Adam Cochran (adamscochran.eth).
| Data | Tascha | Fred Ehrsam | Adam Cochran (adamscochran.eth) |
| --- | --- | --- | --- |
| Tweets downloaded | 3244 | 1674 | 3242 |
| Retweets | 215 | 188 | 555 |
| Short tweets | 210 | 150 | 150 |
| Tweets kept | 2819 | 1336 | 2537 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/35tvoqtp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @adamscochran-fehrsam-taschalabs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fv0c31k5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fv0c31k5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/adamscochran-fehrsam-taschalabs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Chun/w-zh2en-mto | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: cc0-1.0
---
Drop anneface.pt into your stable-diffusion-webui/embeddings folder and use prompt <anneface> to get this upset gal.

|
Cinnamon/electra-small-japanese-discriminator | [
"pytorch",
"electra",
"pretraining",
"ja",
"transformers",
"license:apache-2.0"
]
| null | {
"architectures": [
"ElectraForPreTraining"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 419 | null | ---
inference: true
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
---
# Stable Diffusion v1.5 fine tuned on Waltz with Bashir screencaps
Use prompt: 'wltzwthbshr'
[Waltz with Bashir on IMDB](https://www.imdb.com/title/tt1185616)
### Output Samples:
Settings used: "wltzwthbshr SUBJECT", euler a, 35 steps, cfg 7, 1024x1024, high res fix on, sd-vae-ft-mse-original (AUTOMATIC1111 webui)
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067000797-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669066999379-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067001297-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067002574-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067002737-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067000480-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669066999949-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067002829-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067000524-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669066998455-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067001216-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067000265-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067000984-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067000421-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067003066-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067000476-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067002688-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067001859-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067002184-637bef89ca8542a0ba8cd54b.png" width="100%"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1669067003006-637bef89ca8542a0ba8cd54b.png" width="100%"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at [Stable diffusion Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "mikesmodels/Waltz_with_Bashir_Diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "wltzwthbshr dwayne johnson"
image = pipe(prompt).images[0]
image.save("./dwayne_johnson.png")
``` |
Ciruzzo/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### stevefussion_v3 Dreambooth model trained by daniel-comet with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
Ciruzzo/DialoGPT-small-hattypotter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# arinze/address-match-abp-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 64 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('arinze/address-match-abp-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=arinze/address-match-abp-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 3125 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 157,
"weight_decay": 0.01
}
```
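Putting the pieces above together, the training loop roughly corresponds to the following sketch (the example address pairs are placeholders, and loading the published checkpoint as the starting point is only for illustration; the actual base model used for initialization is not stated here):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("arinze/address-match-abp-v2")

# Placeholder positive pairs; the real training data is not published here.
train_examples = [
    InputExample(texts=["12 Main St, Springfield", "12 Main Street, Springfield"]),
    InputExample(texts=["4 Elm Rd, Dover", "4 Elm Road, Dover"]),
]

train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=157,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```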
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 384, 'out_features': 64, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ClaudeYang/awesome_fb_model | [
"pytorch",
"bart",
"text-classification",
"dataset:multi_nli",
"transformers",
"zero-shot-classification"
]
| zero-shot-classification | {
"architectures": [
"BartForSequenceClassification"
],
"model_type": "bart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | # Lingala Text-to-Speech
This model was trained on OpenSLR's 71.6-hour aligned Lingala Bible dataset.
## Model description
A Conditional Variational Autoencoder with Adversarial Learning (VITS), an end-to-end approach to the text-to-speech task. The model was trained with the espnet2 toolkit.
## Usage
First install espnet2
``` sh
pip install espnet
```
Download the model and the config files from this repo.
To generate a wav file using this model, run the following:
``` sh
from espnet2.bin.tts_inference import Text2Speech
import soundfile as sf
text2speech = Text2Speech(train_config="config.yaml",model_file="train.total_count.best.pth")
wav = text2speech("oyo kati na Ye ozwi lisiko mpe bolimbisi ya masumu")["wav"]
sf.write("outfile.wav", wav.numpy(), text2speech.fs, "PCM_16")
```
|
CleveGreen/FieldClassifier_v2 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 46 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### 2d-art-sprites Dreambooth model trained by ana-tamais with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
You can test this model using this [Colab Notebook for Inference](https://colab.research.google.com/drive/1pFaEJHa7mxFruBfm2hDnR8S6aEo7sAFx?usp=sharing)
Sample pictures of 2dart concept:
<img src="https://huggingface.co/ana-tamais/2d-art-sprites/resolve/main/concept_images/2dart_1.png" width=256></img>
<img src="https://huggingface.co/ana-tamais/2d-art-sprites/resolve/main/concept_images/2dart_4.png" width=256></img>
<img src="https://huggingface.co/ana-tamais/2d-art-sprites/resolve/main/concept_images/2dart_9.png" width=256></img>
We saved the training data in `dataset.zip`, and some generated results in `results.zip`.
### Some recommendations
We recommend to set the:
- prompt as: `"[some wizard, paladin, healer, etc.], in the style of 2dart, white background, no background, full body"`
- negative prompt as: `"deformed, mutilated limbs, background, multiple people"`
- guidance scale between `9 and 10`
- sampling method as `Euler`
- sampling steps as `60`
- batch size as `1` - avoid high batch size, unless you have a high memory GPU
This set of hyperparameters led us to stable and good results; see the sketch below.
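As a hedged sketch of these settings with `diffusers` (the repository id is taken from the image links above; the Euler sampler maps to `EulerDiscreteScheduler`):

```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "ana-tamais/2d-art-sprites"  # assumed from the links on this page
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a wizard, in the style of 2dart, white background, no background, full body",
    negative_prompt="deformed, mutilated limbs, background, multiple people",
    guidance_scale=9.5,
    num_inference_steps=60,
).images[0]
image.save("wizard_sprite.png")
```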
This model was trained on images in which the character has a roughly human form or is an undeformed living being. So if you try to generate something like "sword, mirror, candle, etc." (non-living things), the model does not perform as well.
You need at least a Tesla T4 to be able to run the inference step using the given [notebook](https://colab.research.google.com/drive/1pFaEJHa7mxFruBfm2hDnR8S6aEo7sAFx?usp=sharing). |
CoachCarter/distilbert-base-uncased | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Dan reynolds on Stable Diffusion via Dreambooth
#### model by JuandaSuarez
This your the Stable Diffusion model fine-tuned the Dan reynolds concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks dan reynolds**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:




|
CodeMonkey98/distilroberta-base-finetuned-wikitext2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
widget:
- text: "François Dupont prends la direction générale du groupe IPD"
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: camembert-base-articles-ner-backup
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-articles-ner-backup
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6701
- F1: 0.8723
## Model description
This model identifies named entities: PERSON, ORGANISATION, JOB TITLE.
Another model is being developed to predict relationships between these entities (nomination, departure).
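A hedged usage sketch with the standard token-classification pipeline (the model id below is assumed from the name above; substitute the actual repository path):

```python
from transformers import pipeline

# Assumed model id; replace with the actual repository path of this checkpoint.
ner = pipeline(
    "token-classification",
    model="camembert-base-articles-ner-backup",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("François Dupont prends la direction générale du groupe IPD"))
# Expected entity types: PERSON, ORGANISATION, JOB TITLE
```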
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.9205 | 1.0 | 6 | 1.7426 | 0.0 |
| 1.6476 | 2.0 | 12 | 1.5415 | 0.0 |
| 1.4607 | 3.0 | 18 | 1.3944 | 0.0635 |
| 1.3299 | 4.0 | 24 | 1.2587 | 0.4848 |
| 1.1973 | 5.0 | 30 | 1.1287 | 0.6207 |
| 1.0707 | 6.0 | 36 | 1.0110 | 0.8043 |
| 0.972 | 7.0 | 42 | 0.9266 | 0.8696 |
| 0.8877 | 8.0 | 48 | 0.8632 | 0.8602 |
| 0.8231 | 9.0 | 54 | 0.8279 | 0.8511 |
| 0.7723 | 10.0 | 60 | 0.8001 | 0.8511 |
| 0.7309 | 11.0 | 66 | 0.7617 | 0.8602 |
| 0.6902 | 12.0 | 72 | 0.7364 | 0.8602 |
| 0.6601 | 13.0 | 78 | 0.7104 | 0.8723 |
| 0.6306 | 14.0 | 84 | 0.7062 | 0.8723 |
| 0.6127 | 15.0 | 90 | 0.6896 | 0.8602 |
| 0.605 | 16.0 | 96 | 0.6743 | 0.8723 |
| 0.5892 | 17.0 | 102 | 0.6801 | 0.8723 |
| 0.5843 | 18.0 | 108 | 0.6797 | 0.8723 |
| 0.5731 | 19.0 | 114 | 0.6731 | 0.8723 |
| 0.5707 | 20.0 | 120 | 0.6701 | 0.8723 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
CodeNinja1126/bert-p-encoder | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: eng
datasets:
- banking77
---
# Social Media Sentiment Analysis Model
This is a fine-tuned version of the DistilBERT model. It's best suited for sentiment analysis.
## Model Description
The Social Media Sentiment Analysis Model was trained on a [dataset of tweets](https://www.kaggle.com/code/mohamednabill7/sentiment-analysis-of-twitter-data/data) obtained from Kaggle.
## Intended Uses and Limitations
This model is meant for sentiment analysis. Because it was trained on a corpus of tweets, it is familiar with social media jargon.
### How to use
You can use this model directly with a pipeline for sentiment analysis:
```python
>>> from transformers import pipeline
>>> model_name = "Kwaku/social_media_sa"
>>> generator = pipeline("sentiment-analysis", model=model_name)
>>> result = generator("I like this model")
>>> print(result)
Generated output: [{'label': 'positive', 'score': 0.9494990110397339}]
```
### Limitations and bias
This model inherits the bias of its parent, [Distilbert](https://huggingface.co/models?other=distilbert).
Besides that, it was trained on only 1000 randomly selected sequences, and thus does not achieve a high probability rate.
It does fairly well nonetheless. |
CodeNinja1126/xlm-roberta-large-kor-mrc | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- ko
tags:
- trocr
- image-to-text
license: mit
metrics:
- wer
- cer
widget:
- src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/random_2.jpg
example_title: 랜덤 문장 1
- src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/random_6.jpg
example_title: 랜덤 문장 2
- src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/chatbot_3.jpg
example_title: 챗봇 1
- src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/chatbot_5.jpg
example_title: 챗봇 2
- src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/news_1.jpg
example_title: 뉴스 1
- src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/news_3.jpg
example_title: 뉴스 2
- src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/nsmc_1.jpg
example_title: 영화 리뷰 1
- src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/nsmc_2.jpg
example_title: 영화 리뷰 2
---
# TrOCR for Korean Language (PoC)
## Overview
TrOCR has not yet released a multilingual model that includes Korean, so we trained a Korean model for PoC purposes. Starting from this model, it is recommended to collect more data to continue the first-stage training or to perform fine-tuning as the second stage.
## Collecting data
### Text data
We created training data by processing three types of datasets.
- News summarization dataset: https://huggingface.co/datasets/daekeun-ml/naver-news-summarization-ko
- Naver Movie Sentiment Classification: https://github.com/e9t/nsmc
- Chatbot dataset: https://github.com/songys/Chatbot_data
For efficient data collection, each sentence was separated by a sentence separator library (Kiwi Python wrapper; https://github.com/bab2min/kiwipiepy), and as a result, 637,401 samples were collected.
### Image Data
Image data was generated with TextRecognitionDataGenerator (https://github.com/Belval/TextRecognitionDataGenerator) introduced in the TrOCR paper.
Below is a code snippet for generating images.
```shell
python3 ./trdg/run.py -i ocr_dataset_poc.txt -w 5 -t {num_cores} -f 64 -l ko -c {num_samples} -na 2 --output_dir {dataset_dir}
```
## Training
### Base model
The encoder model is `facebook/deit-base-distilled-patch16-384` and the decoder model is `klue/roberta-base`. This is simpler than initializing from the `microsoft/trocr-base-stage1` weights.
### Parameters
We used heuristic parameters without separate hyperparameter tuning.
- learning_rate = 4e-5
- epochs = 25
- fp16 = True
- max_length = 64
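A minimal sketch of how such an encoder-decoder could be assembled before training (this only mirrors the description above; the full training code is in the author's GitHub repository linked below):

```python
from transformers import VisionEncoderDecoderModel, AutoTokenizer

# Combine the vision encoder and the Korean text decoder named above.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/deit-base-distilled-patch16-384", "klue/roberta-base"
)
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")

# Seq2seq generation needs the decoder-side special tokens wired up.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.vocab_size = model.config.decoder.vocab_size
model.config.max_length = 64
```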
## Usage
### inference.py
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel, AutoTokenizer
import requests
from io import BytesIO
from PIL import Image
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("daekeun-ml/ko-trocr-base-nsmc-news-chatbot")
tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/ko-trocr-base-nsmc-news-chatbot")
url = "https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/news_1.jpg"
response = requests.get(url)
img = Image.open(BytesIO(response.content))
pixel_values = processor(img, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=64)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
All the code required for data collection and model training has been published on the author's Github.
- https://github.com/daekeun-ml/sm-kornlp-usecases/tree/main/trocr |
CoderBoy432/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
language: eng
datasets:
- banking77
---
# Social Media Sentiment Analysis Model (Finetuned)
This is a fine-tuned version of the [Social Media Sentiment Analysis Model](https://huggingface.co/Kwaku/social_media_sa), which is itself a fine-tuned version of [DistilBERT](https://huggingface.co/models?other=distilbert). It's best suited for sentiment analysis.
## Model Description
The Social Media Sentiment Analysis Model was trained on a [dataset of tweets](https://www.kaggle.com/code/mohamednabill7/sentiment-analysis-of-twitter-data/data) obtained from Kaggle.
## Intended Uses and Limitations
This model is meant for sentiment analysis. Because it was trained on a corpus of tweets, it is familiar with social media jargon.
### How to use
You can use this model directly with a pipeline for sentiment analysis:
```python
>>> from transformers import pipeline
>>> model_name = "Kwaku/social_media_sa_finetuned_1"
>>> generator = pipeline("sentiment-analysis", model=model_name)
>>> result = generator("I like this model")
>>> print(result)
Generated output: [{'label': 'positive', 'score': 0.9494990110397339}]
```
### Limitations and bias
This model inherits the bias of its parent, [Distilbert](https://huggingface.co/models?other=distilbert).
Besides that, it was fine-tuned on only 1,000 randomly selected sequences, so its accuracy and prediction confidence are limited.
It does fairly well nonetheless. |
Venkatakrishnan-Ramesh/Text_gen | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This is a wav2vec2-base model trained on selected bird songs from the birddb dataset.
```
import librosa
import torch
from transformers import Wav2Vec2ForPreTraining, Wav2Vec2Processor

# load the audio at 16 kHz, the sampling rate expected by wav2vec2
sound_file = 'sample.wav'
sound_data, _ = librosa.load(sound_file, sr=16000)

model_id = "kojima-r/wav2vec2-base-birddb-small"
model = Wav2Vec2ForPreTraining.from_pretrained(model_id)

# frame-level feature vectors (projected hidden states)
result = model(torch.tensor([sound_data]))
hidden_vecs = result.projected_states
```
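If a single fixed-size embedding per clip is needed (for clustering or retrieval, for example), one simple option is to mean-pool the frame-level vectors from the snippet above. This is only a sketch, not part of the original recipe:
```python
# collapse the time axis to get one embedding per audio clip
clip_embedding = hidden_vecs.mean(dim=1)  # shape: (1, feature_dim)
```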

|
CoffeeAddict93/gpt2-medium-modest-proposal | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: openrail
---
# MJv4 Hallucinations
These are 3 models trained on a small (<2000) dataset of Midjourney v4 images with no particular style. <b> These models are nowhere near as good as Midjourney v4 </b>, and they all suffer from a lot of "language drift" but they do have an interesting style. They are the best of something like 60 different models I trained as part of a set of experiments aimed at replicating Midjourney v4's style with only a few, uncaptioned images.
The models are:
- <b>mjg-4000-model.ckpt</b>: trained on 250 MJv4 images with no regularization for 4000 steps, prompt: "mjg style"
- <b>mjg-12000-model.ckpt</b>: trained on 250 MJv4 images with no regularization for 12000 steps, prompt: "mjg style"
- <b>mjv-1200-model.ckpt</b>: trained on 7 MJv4 images with 1000 regularization images for 1200 steps, prompt: "mjv style"
Models you can download are <b>bolded</b>
<img src="https://github.com/Lewington-pitsos/mj4-hallucinations/blob/main/compare.png?raw=true" width="100%"/>
In my subjective opinion, only <b>mjv-1200-model.ckpt</b> is actually worth downloading.
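If you want to try one of the checkpoints outside a web UI, a rough diffusers sketch follows. It assumes a diffusers version recent enough to load single-file `.ckpt` checkpoints; adjust the filename and prompt token to the model you downloaded:
```python
import torch
from diffusers import StableDiffusionPipeline

# load the raw .ckpt directly (requires single-file loading support in diffusers)
pipe = StableDiffusionPipeline.from_single_file("mjv-1200-model.ckpt", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("mjv style, portrait of an old wizard, colorful").images[0]
image.save("wizard.png")
```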
## Credits:
- [NitroSock](https://github.com/nitrosocke/dreambooth-training-guide) for the regularization images
- [prompthero](https://huggingface.co/prompthero/openjourney) whose idea I copied
## Take Down
As far as I can tell, uploading these models does not cause any person or corporate entity any harm, but if you think I am wrong about this please reach out. |
CoffeeAddict93/gpt2-modest-proposal | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
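For reference, a sketch of how these hyperparameters map onto `TrainingArguments` (an illustration only; the original training script is not included in this card):
```python
from transformers import TrainingArguments

# mirrors the hyperparameters listed above; model and dataset setup omitted
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```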
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
CogComp/bart-faithful-summary-detector | [
"pytorch",
"jax",
"bart",
"text-classification",
"en",
"dataset:xsum",
"transformers",
"xsum",
"license:cc-by-sa-4.0"
]
| text-classification | {
"architectures": [
"BartForSequenceClassification"
],
"model_type": "bart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": 1,
"max_length": 128,
"min_length": 12,
"no_repeat_ngram_size": null,
"num_beams": 4,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 234 | null | W&B run: https://wandb.ai/jellywibble/huggingface/runs/1yo5mgs4?workspace=user-jellywibble |
CogComp/roberta-temporal-predictor | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.00436",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: dung1308/dung_NT_model_save
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dung1308/dung_NT_model_save
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8144
- Validation Loss: 3.6030
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
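A rough sketch of how this optimizer and precision policy can be recreated in TensorFlow (an illustration only; the exact training script is not part of this card):
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# mixed-precision policy used during training
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# optimizer matching the configuration above
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
```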
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4431 | 3.9985 | 0 |
| 3.9986 | 3.8016 | 1 |
| 3.8144 | 3.6030 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.7.0
- Tokenizers 0.11.0
|
CohleM/bert-nepali-tokenizer | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 155.33 +/- 58.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch is shown below; the repository id and filename are placeholders to fill in with this model's files.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# repo_id and filename are placeholders for this model's checkpoint on the Hub
checkpoint = load_from_hub(repo_id="<user>/<repo-name>", filename="<model-file>.zip")
model = PPO.load(checkpoint)
```
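Once loaded, the agent can be evaluated with the usual Stable-Baselines3 utilities; a short sketch (assuming the Gym `LunarLander-v2` environment and its Box2D dependency are installed):
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```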
|
CohleM/mbert-nepali-tokenizer | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
tags:
- vision-language
- clip
- vilt
datasets:
- lil-lab/kilogram-data
---
KiloGram dataset and code repo: https://github.com/lil-lab/kilogram
Preprocessed training and evaluation data: https://huggingface.co/datasets/lil-lab/kilogram-data |
ComCom/gpt2-large | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- vision
- depth-estimation
- generated_from_trainer
model-index:
- name: glpn-nyu-finetuned-diode-221122-014502
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glpn-nyu-finetuned-diode-221122-014502
This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3476
- Mae: 0.2763
- Rmse: 0.4088
- Abs Rel: 0.3308
- Log Mae: 0.1161
- Log Rmse: 0.1700
- Delta1: 0.5682
- Delta2: 0.8301
- Delta3: 0.9279
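As a quick illustration of how a GLPN checkpoint like this one is typically used at inference time (a sketch; replace the placeholder id with the fine-tuned checkpoint's repository id):
```python
import torch
from PIL import Image
from transformers import GLPNFeatureExtractor, GLPNForDepthEstimation

model_id = "vinvino02/glpn-nyu"  # placeholder: substitute the fine-tuned checkpoint id
feature_extractor = GLPNFeatureExtractor.from_pretrained(model_id)
model = GLPNForDepthEstimation.from_pretrained(model_id)

image = Image.open("example.jpg")
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
depth_map = outputs.predicted_depth  # relative depth, shape (1, H, W)
```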
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 0.7598 | 1.0 | 72 | 0.5809 | 0.7606 | 0.9281 | 0.9834 | 0.2597 | 0.3064 | 0.1320 | 0.3250 | 0.6234 |
| 0.4481 | 2.0 | 144 | 0.4013 | 0.3507 | 0.4879 | 0.4181 | 0.1415 | 0.1950 | 0.4427 | 0.7602 | 0.9021 |
| 0.4066 | 3.0 | 216 | 0.3706 | 0.3081 | 0.4484 | 0.3675 | 0.1269 | 0.1823 | 0.5187 | 0.7977 | 0.9148 |
| 0.3965 | 4.0 | 288 | 0.3641 | 0.2987 | 0.4336 | 0.3607 | 0.1239 | 0.1787 | 0.5294 | 0.8072 | 0.9205 |
| 0.3942 | 5.0 | 360 | 0.3582 | 0.2903 | 0.4251 | 0.3490 | 0.1207 | 0.1753 | 0.5466 | 0.8165 | 0.9232 |
| 0.3575 | 6.0 | 432 | 0.3568 | 0.2898 | 0.4184 | 0.3569 | 0.1211 | 0.1753 | 0.5390 | 0.8171 | 0.9265 |
| 0.3418 | 7.0 | 504 | 0.3490 | 0.2771 | 0.4178 | 0.3248 | 0.1156 | 0.1707 | 0.5783 | 0.8312 | 0.9259 |
| 0.2916 | 8.0 | 576 | 0.3512 | 0.2819 | 0.4172 | 0.3373 | 0.1178 | 0.1725 | 0.5620 | 0.8253 | 0.9262 |
| 0.3055 | 9.0 | 648 | 0.3506 | 0.2808 | 0.4091 | 0.3422 | 0.1180 | 0.1718 | 0.5537 | 0.8248 | 0.9292 |
| 0.2932 | 10.0 | 720 | 0.3518 | 0.2809 | 0.4110 | 0.3441 | 0.1182 | 0.1724 | 0.5548 | 0.8239 | 0.9290 |
| 0.2518 | 11.0 | 792 | 0.3476 | 0.2756 | 0.4115 | 0.3265 | 0.1155 | 0.1700 | 0.5741 | 0.8326 | 0.9264 |
| 0.3177 | 12.0 | 864 | 0.3491 | 0.2784 | 0.4104 | 0.3333 | 0.1169 | 0.1706 | 0.5620 | 0.8290 | 0.9283 |
| 0.3038 | 13.0 | 936 | 0.3503 | 0.2795 | 0.4094 | 0.3410 | 0.1175 | 0.1717 | 0.5596 | 0.8275 | 0.9283 |
| 0.3299 | 14.0 | 1008 | 0.3460 | 0.2750 | 0.4098 | 0.3257 | 0.1154 | 0.1693 | 0.5721 | 0.8325 | 0.9283 |
| 0.3325 | 15.0 | 1080 | 0.3476 | 0.2763 | 0.4088 | 0.3308 | 0.1161 | 0.1700 | 0.5682 | 0.8301 | 0.9279 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
- Tokenizers 0.13.2
|
Craig/paraphrase-MiniLM-L6-v2 | [
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,026 | null | ---
license: apache-2.0
tags:
- vision
- depth-estimation
- generated_from_trainer
model-index:
- name: glpn-nyu-finetuned-diode-221122-044810
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glpn-nyu-finetuned-diode-221122-044810
This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3690
- Mae: 0.2909
- Rmse: 0.4208
- Abs Rel: 0.3635
- Log Mae: 0.1224
- Log Rmse: 0.1793
- Delta1: 0.5323
- Delta2: 0.8179
- Delta3: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 1.3864 | 1.0 | 72 | 1.2016 | 3.4656 | 3.5204 | 5.1101 | 0.6881 | 0.7346 | 0.0 | 0.0011 | 0.0764 |
| 1.0082 | 2.0 | 144 | 0.4607 | 0.4107 | 0.5420 | 0.5254 | 0.1697 | 0.2234 | 0.3596 | 0.6460 | 0.8465 |
| 0.4656 | 3.0 | 216 | 0.4071 | 0.3431 | 0.4758 | 0.4359 | 0.1425 | 0.1992 | 0.4567 | 0.7481 | 0.8958 |
| 0.4093 | 4.0 | 288 | 0.3953 | 0.3261 | 0.4622 | 0.4197 | 0.1363 | 0.1947 | 0.4841 | 0.7624 | 0.9103 |
| 0.392 | 5.0 | 360 | 0.3916 | 0.3211 | 0.4463 | 0.4116 | 0.1338 | 0.1896 | 0.4810 | 0.7756 | 0.9176 |
| 0.3466 | 6.0 | 432 | 0.3807 | 0.3075 | 0.4451 | 0.3658 | 0.1293 | 0.1839 | 0.5026 | 0.7921 | 0.9180 |
| 0.3297 | 7.0 | 504 | 0.3811 | 0.3047 | 0.4448 | 0.3534 | 0.1290 | 0.1835 | 0.5066 | 0.7920 | 0.9137 |
| 0.2768 | 8.0 | 576 | 0.3779 | 0.3057 | 0.4283 | 0.3894 | 0.1280 | 0.1832 | 0.5046 | 0.7996 | 0.9256 |
| 0.2849 | 9.0 | 648 | 0.3753 | 0.2978 | 0.4341 | 0.3496 | 0.1259 | 0.1806 | 0.5149 | 0.8041 | 0.9184 |
| 0.2571 | 10.0 | 720 | 0.3825 | 0.3068 | 0.4305 | 0.3896 | 0.1289 | 0.1849 | 0.4998 | 0.7974 | 0.9206 |
| 0.2246 | 11.0 | 792 | 0.3718 | 0.2951 | 0.4235 | 0.3678 | 0.1240 | 0.1803 | 0.5249 | 0.8105 | 0.9248 |
| 0.2703 | 12.0 | 864 | 0.3716 | 0.2945 | 0.4317 | 0.3593 | 0.1235 | 0.1808 | 0.5324 | 0.8122 | 0.9215 |
| 0.2596 | 13.0 | 936 | 0.3692 | 0.2921 | 0.4185 | 0.3690 | 0.1229 | 0.1798 | 0.5294 | 0.8167 | 0.9264 |
| 0.2604 | 14.0 | 1008 | 0.3684 | 0.2893 | 0.4171 | 0.3601 | 0.1223 | 0.1785 | 0.5325 | 0.8179 | 0.9252 |
| 0.2679 | 15.0 | 1080 | 0.3690 | 0.2909 | 0.4208 | 0.3635 | 0.1224 | 0.1793 | 0.5323 | 0.8179 | 0.9258 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
- Tokenizers 0.13.2
|
Cthyllax/DialoGPT-medium-PaladinDanse | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: creativeml-openrail-m
---
Made using highly curated, best-quality masterful artwork from an ancient Indonesian stone carving website, with some help from their independent doodling connoisseur brothers in arms: 3000 pieces of their best work.
Prompt used: aiseeic
aisee_10000.ckpt was made with Anything v.3.
aiseeic_15000.ckpt was made with SD 1.5.
AIsee (Anything) examples



AIsee SD examples



I own nothing and I will be happy.
|
DJSammy/bert-base-swedish-uncased_BotXO-ai | [
"pytorch",
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.6346626984126984
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.32887700534759357
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3264094955489614
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.47581989994441354
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.464
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37719298245614036
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.36342592592592593
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7761036612927528
- name: F1 (macro)
type: f1_macro
value: 0.7415561766602355
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7328638497652582
- name: F1 (macro)
type: f1_macro
value: 0.47573763054929613
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5390032502708559
- name: F1 (macro)
type: f1_macro
value: 0.49194003623703636
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8753564721430062
- name: F1 (macro)
type: f1_macro
value: 0.7536524804914483
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8282670009401442
- name: F1 (macro)
type: f1_macro
value: 0.8236645741563291
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.32887700534759357
- Accuracy on SAT: 0.3264094955489614
- Accuracy on BATS: 0.47581989994441354
- Accuracy on U2: 0.37719298245614036
- Accuracy on U4: 0.36342592592592593
- Accuracy on Google: 0.464
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.7761036612927528
- Micro F1 score on CogALexV: 0.7328638497652582
- Micro F1 score on EVALution: 0.5390032502708559
- Micro F1 score on K&H+N: 0.8753564721430062
- Micro F1 score on ROOT09: 0.8282670009401442
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.6346626984126984
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```
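A common use of these relation embeddings is to compare word pairs by cosine similarity; a small sketch building on the snippet above:
```python
import numpy as np

emb_a = np.array(model.get_embedding(['Tokyo', 'Japan']))
emb_b = np.array(model.get_embedding(['Paris', 'France']))

# cosine similarity between the two relation embeddings
cosine = float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
print(cosine)
```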
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,907 | null | ---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7048015873015873
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37967914438502676
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3916913946587537
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5347415230683713
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.69
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.41228070175438597
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3888888888888889
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.853246948922706
- name: F1 (macro)
type: f1_macro
value: 0.8485536876305343
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8044600938967136
- name: F1 (macro)
type: f1_macro
value: 0.5726819680585065
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5839653304442037
- name: F1 (macro)
type: f1_macro
value: 0.5524953070884607
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.934687347847256
- name: F1 (macro)
type: f1_macro
value: 0.8063588254058023
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8279536195549985
- name: F1 (macro)
type: f1_macro
value: 0.7955713493721125
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.37967914438502676
- Accuracy on SAT: 0.3916913946587537
- Accuracy on BATS: 0.5347415230683713
- Accuracy on U2: 0.41228070175438597
- Accuracy on U4: 0.3888888888888889
- Accuracy on Google: 0.69
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.853246948922706
- Micro F1 score on CogALexV: 0.8044600938967136
- Micro F1 score on EVALution: 0.5839653304442037
- Micro F1 score on K&H+N: 0.934687347847256
- Micro F1 score on ROOT09: 0.8279536195549985
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7048015873015873
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.6670436507936508
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3770053475935829
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37388724035608306
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4802668148971651
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.558
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.33771929824561403
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34953703703703703
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.893174627090553
- name: F1 (macro)
type: f1_macro
value: 0.8866591988732194
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7863849765258216
- name: F1 (macro)
type: f1_macro
value: 0.5308624907920565
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5704225352112676
- name: F1 (macro)
type: f1_macro
value: 0.5510856788391408
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9581275648605412
- name: F1 (macro)
type: f1_macro
value: 0.8644516035001516
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8523973675963648
- name: F1 (macro)
type: f1_macro
value: 0.8523947470987124
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3770053475935829
- Accuracy on SAT: 0.37388724035608306
- Accuracy on BATS: 0.4802668148971651
- Accuracy on U2: 0.33771929824561403
- Accuracy on U4: 0.34953703703703703
- Accuracy on Google: 0.558
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.893174627090553
- Micro F1 score on CogALexV: 0.7863849765258216
- Micro F1 score on EVALution: 0.5704225352112676
- Micro F1 score on K&H+N: 0.9581275648605412
- Micro F1 score on ROOT09: 0.8523973675963648
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.6670436507936508
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
DamolaMack/Classyfied | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8018650793650793
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3502673796791444
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35014836795252224
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5202890494719289
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.644
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.39035087719298245
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.43287037037037035
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8461654361910502
- name: F1 (macro)
type: f1_macro
value: 0.8411664963735426
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8145539906103286
- name: F1 (macro)
type: f1_macro
value: 0.5873414064116238
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6505958829902492
- name: F1 (macro)
type: f1_macro
value: 0.6269958308732405
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9319051262433052
- name: F1 (macro)
type: f1_macro
value: 0.8393686548194149
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7511751801942964
- name: F1 (macro)
type: f1_macro
value: 0.6464435364634403
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3502673796791444
- Accuracy on SAT: 0.35014836795252224
- Accuracy on BATS: 0.5202890494719289
- Accuracy on U2: 0.39035087719298245
- Accuracy on U4: 0.43287037037037035
- Accuracy on Google: 0.644
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8461654361910502
- Micro F1 score on CogALexV: 0.8145539906103286
- Micro F1 score on EVALution: 0.6505958829902492
- Micro F1 score on K&H+N: 0.9319051262433052
- Micro F1 score on ROOT09: 0.7511751801942964
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8018650793650793
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
Danbi/distilgpt2-finetuned-wikitext2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: agpl-3.0
language:
- gl
- pt
widget:
- text: >-
A miña amiga Rosa, de Lisboa, estudou en Montreal. Agora traballa en Nova
Pescanova.
---
# Named Entity Recognition (NER) model for Galician
This is a NER model for Galician (ILG/RAG spelling) which uses the standard 'enamex' classes: LOC (geographical locations); PER (people); ORG (organizations); MISC (other entities).
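A minimal usage sketch with the transformers pipeline, run on the widget example above (the model id below is a placeholder for this repository):
```python
from transformers import pipeline

# replace the placeholder with this repository's id
ner = pipeline("token-classification", model="<this-repository>", aggregation_strategy="simple")
print(ner("A miña amiga Rosa, de Lisboa, estudou en Montreal. Agora traballa en Nova Pescanova."))
```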
The model is based on [BERT-base-gl-cased](https://huggingface.co/marcosgg/bert-base-gl-cased), which has been fine-tuned using custom splits of the [SLI_NERC dataset](https://github.com/xavier-gz/SLI_Galician_Corpora). On the test split of this dataset (not used for training), the model obtained the following results (Precision/Recall/F-score): 87.69 / 89.7 / 88.68. |
Davlan/xlm-roberta-base-finetuned-lingala | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-idrak-practice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-idrak-practice
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3538
- Wer: 0.3209
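A minimal inference sketch for a fine-tuned wav2vec2 CTC checkpoint like this one (the model id below is a placeholder for this repository):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "<this-repository>"  # placeholder
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# wav2vec2 expects 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```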
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.634 | 0.87 | 500 | 2.6452 | 1.0 |
| 1.0497 | 1.73 | 1000 | 0.5711 | 0.5138 |
| 0.4584 | 2.6 | 1500 | 0.4421 | 0.4492 |
| 0.3198 | 3.46 | 2000 | 0.3818 | 0.3941 |
| 0.2263 | 4.33 | 2500 | 0.3653 | 0.3767 |
| 0.1845 | 5.19 | 3000 | 0.3424 | 0.3661 |
| 0.1388 | 6.06 | 3500 | 0.3702 | 0.3519 |
| 0.1214 | 6.92 | 4000 | 0.3515 | 0.3439 |
| 0.1026 | 7.79 | 4500 | 0.3585 | 0.3292 |
| 0.0834 | 8.65 | 5000 | 0.3474 | 0.3236 |
| 0.0737 | 9.52 | 5500 | 0.3538 | 0.3209 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu116
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Dazai/Ko | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
duplicated_from: hf-internal-testing/tiny-stable-diffusion-torch
---
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("hf-internal-testing/tiny-stable-diffusion-torch")
```
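Since this is a tiny, randomly initialized test checkpoint, the outputs are not meaningful images; a quick smoke-test call might look like:
```python
# run a very short generation just to exercise the pipeline
image = pipe("a photo of a cat", num_inference_steps=2).images[0]
```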
|
Declan/Breitbart_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
---
Use the prompt '**btdmnky**' to get a monkey. You can also use the in-game categories to steer the result; for example, "btdmnky magic" will generate a monkey based on the magic monkeys in-game.
You can use:
- primary
- military
- magic
- support (results won't be great)
- hero
Some examples:
<font size="1">magic hero, godly, laser beams, god rays, dominant pose, monkey, cloak, super powers, Magic The Gathering, magical, fantasy, colorful, realistic
Negative prompt: lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, deformed, mutated, extra limbs,
Steps: 35, Sampler: Euler, CFG scale: 6.5, Seed: 235226828, Size: 512x512</font>

<font size="1">btdmnky hero magic style, high quality, digital art, monkey, godly, god, powerful
Negative prompt: jagged, jaggy lines, poorly drawn, low quality, (((text))), pixelated, jpeg artifacts, messy, deformed, mutated, extra limbs, extra tails
Steps: 40, Sampler: DPM++ 2S a, CFG scale: 7, Seed: 2116075235, Size: 512x512</font>

<font size="1">btdmnky primary style, high quality, digital art, monkey
Negative prompt: jagged, jaggy lines, poorly drawn, low quality, (((text))), pixelated, jpeg artifacts, messy, deformed, mutated, extra limbs, extra tails
Steps: 40, Sampler: DPM++ 2S a, CFG scale: 7, Seed: 3160304320, Size: 512x512</font>

<font size="1">btdmnky hero style, high quality, digital art, monkey
Negative prompt: jagged, jaggy lines, poorly drawn, low quality, (((text))), pixelated, jpeg artifacts, messy, deformed, mutated, extra limbs, extra tails
Steps: 40, Sampler: DPM++ 2S a, CFG scale: 7, Seed: 968959303, Size: 512x512</font>

<font size="1">btdmnky magic style, high quality, cat
Negative prompt: jagged, jaggy lines, poorly drawn, low quality, (((text))), pixelated, jpeg artifacts
Steps: 40, Sampler: Euler, CFG scale: 7, Seed: 2591613767, Size: 512x512</font>
 |
Declan/ChicagoTribune_model_v7 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: chile-gpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chile-gpt
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 9.4320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.6676 | 0.98 | 6 | 9.5748 |
| 9.6237 | 1.98 | 12 | 9.2470 |
| 9.2815 | 2.98 | 18 | 8.8724 |
| 8.8097 | 3.98 | 24 | 8.3629 |
| 8.2296 | 4.98 | 30 | 7.8407 |
| 7.6891 | 5.98 | 36 | 7.4161 |
| 7.3013 | 6.98 | 42 | 7.1598 |
| 7.0671 | 7.98 | 48 | 7.0080 |
| 6.9404 | 8.98 | 54 | 6.9133 |
| 6.7543 | 9.98 | 60 | 6.7723 |
| 6.5845 | 10.98 | 66 | 6.6619 |
| 6.4193 | 11.98 | 72 | 6.5965 |
| 6.2554 | 12.98 | 78 | 6.5185 |
| 6.0993 | 13.98 | 84 | 6.4632 |
| 5.93 | 14.98 | 90 | 6.4155 |
| 5.7684 | 15.98 | 96 | 6.4183 |
| 5.6242 | 16.98 | 102 | 6.3981 |
| 5.4577 | 17.98 | 108 | 6.4609 |
| 5.2898 | 18.98 | 114 | 6.4577 |
| 5.1113 | 19.98 | 120 | 6.5617 |
| 4.9319 | 20.98 | 126 | 6.5827 |
| 4.7464 | 21.98 | 132 | 6.6961 |
| 4.5505 | 22.98 | 138 | 6.8359 |
| 4.341 | 23.98 | 144 | 6.9193 |
| 4.1324 | 24.98 | 150 | 7.0325 |
| 3.8938 | 25.98 | 156 | 7.1993 |
| 3.6691 | 26.98 | 162 | 7.3179 |
| 3.4316 | 27.98 | 168 | 7.4708 |
| 3.2041 | 28.98 | 174 | 7.5654 |
| 2.9614 | 29.98 | 180 | 7.7535 |
| 2.7189 | 30.98 | 186 | 7.8551 |
| 2.4944 | 31.98 | 192 | 8.0094 |
| 2.2624 | 32.98 | 198 | 8.0527 |
| 2.0292 | 33.98 | 204 | 8.1857 |
| 1.809 | 34.98 | 210 | 8.3468 |
| 1.597 | 35.98 | 216 | 8.4307 |
| 1.3849 | 36.98 | 222 | 8.6230 |
| 1.2081 | 37.98 | 228 | 8.6666 |
| 1.0273 | 38.98 | 234 | 8.7926 |
| 0.8661 | 39.98 | 240 | 8.8861 |
| 0.7308 | 40.98 | 246 | 8.9042 |
| 0.6189 | 41.98 | 252 | 8.9202 |
| 0.5335 | 42.98 | 258 | 9.0861 |
| 0.459 | 43.98 | 264 | 9.1198 |
| 0.3958 | 44.98 | 270 | 9.2129 |
| 0.3587 | 45.98 | 276 | 9.2434 |
| 0.3222 | 46.98 | 282 | 9.3005 |
| 0.2948 | 47.98 | 288 | 9.3961 |
| 0.2677 | 48.98 | 294 | 9.4605 |
| 0.2348 | 49.98 | 300 | 9.4320 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+rocm5.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Declan/NPR_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- LiveEvil/autotrain-data-copuml-la-beta-demo
co2_eq_emissions:
emissions: 1.2815143214785873
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2205770755
- CO2 Emissions (in grams): 1.2815
## Validation Metrics
- Loss: 1.085
- Accuracy: 0.747
- Macro F1: 0.513
- Micro F1: 0.747
- Weighted F1: 0.715
- Macro Precision: 0.533
- Micro Precision: 0.747
- Weighted Precision: 0.691
- Macro Recall: 0.515
- Micro Recall: 0.747
- Weighted Recall: 0.747
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/LiveEvil/autotrain-copuml-la-beta-demo-2205770755
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("LiveEvil/autotrain-copuml-la-beta-demo-2205770755", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("LiveEvil/autotrain-copuml-la-beta-demo-2205770755", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
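# (Added illustrative step, not part of the original snippet.)
# Turn the raw logits into a predicted label and its probability:
import torch
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))
print(model.config.id2label[predicted_id], float(probs.max()))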
``` |
Declan/NPR_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- generated_from_trainer
datasets:
- ebiquity-v2
model-index:
- name: enlmr-conll2003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlmr-conll2003
This model is a fine-tuned version of [manirai91/enlm-roberta-final](https://huggingface.co/manirai91/enlm-roberta-final) on the ebiquity-v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Declan/Politico_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-11-22T17:53:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-the_verge-linustechtips-two_min
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-the_verge-linustechtips-two_min
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Declan/Politico_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-11-22T18:13:52Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Modified-Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.10 +/- 10.73
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
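For readers new to the algorithm, here is a minimal, self-contained sketch of the core REINFORCE (Monte-Carlo policy gradient) update taught in that unit. It runs on dummy data, and the network sizes and hyperparameters below are illustrative assumptions, not the exact training script behind this checkpoint:
```python
# Minimal REINFORCE sketch (illustrative; not the exact script behind this checkpoint).
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, state_dim=7, hidden=64, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, state):
        # Returns a categorical distribution over actions.
        return torch.distributions.Categorical(logits=self.net(state))

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Discounted returns, accumulated backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    return -(torch.stack(log_probs) * returns).sum()

# Dummy rollout just to demonstrate one update step.
policy = Policy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
log_probs, rewards = [], []
for _ in range(10):
    dist = policy(torch.randn(7))   # stand-in for a Pixelcopter observation
    action = dist.sample()
    log_probs.append(dist.log_prob(action))
    rewards.append(1.0)             # stand-in reward
loss = reinforce_loss(log_probs, rewards)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```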
|
DeepPavlov/xlm-roberta-large-en-ru | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"en",
"ru",
"transformers"
]
| feature-extraction | {
"architectures": [
"XLMRobertaModel"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 190 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-wikitext2
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 259 | 6.4308 |
| 6.8563 | 2.0 | 518 | 6.0898 |
| 6.8563 | 3.0 | 777 | 6.0325 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Denny29/DialoGPT-medium-asunayuuki | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-11-22T22:24:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-multilingual-cased-sv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sv2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DeskDown/MarianMixFT_en-fil | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-11-22T22:30:20Z |
---
language:
- pt
thumbnail: "Portuguese BERT for the Legal Domain"
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
datasets:
- assin
- assin2
- stjiris/portuguese-legal-sentences-v1.0
widget:
- source_sentence: "O advogado apresentou as provas ao juíz."
sentences:
- "O juíz leu as provas."
- "O juíz leu o recurso."
- "O juíz atirou uma pedra."
example_title: "Example 1"
model-index:
- name: BERTimbau
results:
- task:
name: STS
type: STS
metrics:
- name: Pearson Correlation - assin Dataset
type: Pearson Correlation
value: 0.7716333759993093
- name: Pearson Correlation - assin2 Dataset
type: Pearson Correlation
value: 0.8403302138785704
- name: Pearson Correlation - stsb_multi_mt pt Dataset
type: Pearson Correlation
value: 0.8249826985133595
---
# stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0 (Legal BERTimbau)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0 derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
It was trained with the MLM objective (learning rate 3e-5) on [legal sentences from roughly 30,000 documents](https://huggingface.co/datasets/stjiris/portuguese-legal-sentences-v1.0) for 130k training steps, the configuration that gave the best performance in our semantic search system.
It is adapted to the Portuguese legal domain and fine-tuned for STS on Portuguese datasets: [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2) and the Portuguese subset of [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0')
embeddings = model.encode(sentences)
print(embeddings)
```
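Since this model targets semantic search, here is a small follow-up sketch using the standard `util.cos_sim` helper from sentence-transformers (the example sentences are taken from the widget above) to score similarity between a query and candidate sentences:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0')

query = model.encode("O advogado apresentou as provas ao juíz.", convert_to_tensor=True)
candidates = model.encode(
    ["O juíz leu as provas.", "O juíz atirou uma pedra."], convert_to_tensor=True
)

# Higher cosine similarity = more semantically related.
print(util.cos_sim(query, candidates))
```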
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0')
model = AutoModel.from_pretrained('stjiris/bert-large-portuguese-cased-legal-mlm-sts-v1.0')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
If you use this work, please cite:
```bibtex
@inproceedings{MeloSemantic,
author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~a}o},
title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a},
}
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@inproceedings{fonseca2016assin,
title={ASSIN: Avaliacao de similaridade semantica e inferencia textual},
author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
pages={13--15},
year={2016}
}
@inproceedings{real2020assin,
title={The assin 2 shared task: a quick overview},
author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={406--412},
year={2020},
organization={Springer}
}
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}
``` |
DeskDown/MarianMixFT_en-id | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-11-22T22:35:19Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: xlm-roberta-conll2003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-conll2003
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
DeskDown/MarianMixFT_en-ms | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-11-22T22:47:21Z | ---
license: mit
tags:
- image-to-text
- image-to-image
- text-to-image
- text-to-text
- image-editing
- image-variation
- generation
- vision
datasets:
- Laion2B-en
widget:
- text: "A high tech solarpunk utopia in the Amazon rainforest"
example_title: Amazon rainforest
---
# Versatile Diffusion V1.0 Model Card
We built **Versatile Diffusion (VD), the first unified multi-flow multimodal diffusion framework**, as a step towards **Universal Generative AI**. Versatile Diffusion can natively support image-to-text, image-variation, text-to-image, and text-variation, and can be further extended to other applications such as semantic-style disentanglement, image-text dual-guided generation, latent image-to-text-to-image editing, and more. Future versions will support more modalities such as speech, music, video and 3D.
Resources for more information: [GitHub](https://github.com/SHI-Labs/Versatile-Diffusion), [arXiv](https://arxiv.org/abs/2211.08332).
# Model Details
One single flow of Versatile Diffusion contains a VAE, a diffuser, and a context encoder, and thus handles one task (e.g., text-to-image) under one data type (e.g., image) and one context type (e.g., text). The multi-flow structure of Versatile Diffusion shows in the following diagram:
<p align="center">
<img src="https://huggingface.co/shi-labs/versatile-diffusion-model/resolve/main/assets/figures/vd_combined.png" width="99%">
</p>
- **Developed by:** Xingqian Xu, Atlas Wang, Eric Zhang, Kai Wang, and Humphrey Shi
- **Model type:** Diffusion-based multimodal generation model
- **Language(s):** English
- **License:** MIT
- **Resources for more information:** [GitHub Repository](https://github.com/SHI-Labs/Versatile-Diffusion), [Paper](https://arxiv.org/abs/2211.08332).
- **Cite as:**
```
@article{xu2022versatile,
title = {Versatile Diffusion: Text, Images and Variations All in One Diffusion Model},
author = {Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2211.08332},
eprint = {2211.08332},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
```
# Usage
You can use the model both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [SHI-Labs Versatile Diffusion codebase](https://github.com/SHI-Labs/Versatile-Diffusion).
## 🧨 Diffusers
Diffusers lets you use both a unified pipeline and more memory-efficient, task-specific pipelines.
**Make sure to install `transformers` from `"main"` in order to use this model.**:
```
pip install git+https://github.com/huggingface/transformers
```
## VersatileDiffusionPipeline
To use Versatile Diffusion for all tasks, it is recommended to use the [`VersatileDiffusionPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/versatile_diffusion#diffusers.VersatileDiffusionPipeline):
```py
#! pip install git+https://github.com/huggingface/transformers diffusers torch
from diffusers import VersatileDiffusionPipeline
import torch
import requests
from io import BytesIO
from PIL import Image
pipe = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
# prompt
prompt = "a red car"
# initial image
url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg"
response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert("RGB")
# text to image
image = pipe.text_to_image(prompt).images[0]
# image variation
image = pipe.image_variation(image).images[0]
# dual text- and image-guided generation
image = pipe.dual_guided(prompt, image).images[0]
```
### Task Specific
The task specific pipelines load only the weights that are needed onto GPU.
You can find all task specific pipelines [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/versatile_diffusion#versatilediffusion).
You can use them as follows:
### Text to Image
```py
from diffusers import VersatileDiffusionTextToImagePipeline
import torch
pipe = VersatileDiffusionTextToImagePipeline.from_pretrained("shi-labs/versatile-diffusion", torch_dtype=torch.float16)
pipe.remove_unused_weights()
pipe = pipe.to("cuda")
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe("an astronaut riding on a horse on mars", generator=generator).images[0]
image.save("./astronaut.png")
```
#### Image variations
```py
from diffusers import VersatileDiffusionImageVariationPipeline
import torch
import requests
from io import BytesIO
from PIL import Image
# download an initial image
url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg"
response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert("RGB")
pipe = VersatileDiffusionImageVariationPipeline.from_pretrained("shi-labs/versatile-diffusion", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(image, generator=generator).images[0]
image.save("./car_variation.png")
```
#### Dual-guided generation
```py
from diffusers import VersatileDiffusionDualGuidedPipeline
import torch
import requests
from io import BytesIO
from PIL import Image
# download an initial image
url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg"
response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert("RGB")
text = "a red car in the sun"
pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained("shi-labs/versatile-diffusion", torch_dtype=torch.float16)
pipe.remove_unused_weights()
pipe = pipe.to("cuda")
generator = torch.Generator(device="cuda").manual_seed(0)
text_to_image_strength = 0.75
image = pipe(prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator).images[0]
image.save("./red_car.png")
```
### Original GitHub Repository
Follow the instructions [here](https://github.com/SHI-Labs/Versatile-Diffusion/#evaluation).
# Cautions, Biases, and Content Acknowledgment
We would like to raise users' awareness of this demo's potential issues and concerns. Like previous large foundation models, Versatile Diffusion could be problematic in some cases, partially due to the imperfect training data and pretrained network (VAEs / context encoders) with limited scope. In its future research phase, VD may do better on tasks such as text-to-image, image-to-text, etc., with the help of more powerful VAEs, more sophisticated network designs, and cleaner data. So far, we have kept all features available for research testing both to show the great potential of the VD framework and to collect important feedback to improve the model in the future. We welcome researchers and users to report issues with the HuggingFace community discussion feature or email the authors.
Beware that VD may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography, and violence. VD was trained on the LAION-2B dataset, which scraped non-curated online images and text, and may contain unintended exceptions as we removed illegal content. VD in this demo is meant only for research purposes. |
DeskDown/MarianMixFT_en-th | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-11-22T23:09:35Z | ---
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/tuwonga/dbluth/resolve/main/dbluth_prev1.jpg"
tags:
- stable-diffusion
- text-to-image
---
### dbluth
I played a lot of laser-disc video games in my childhood, so this model is my personal tribute to the great Disney animator Don Bluth. This is a fine-tuned Stable Diffusion model (based on v1.5): I trained three different models from the laser-disc video games **Dragon's Lair**, **Space Ace** and **Dragon's Lair II: Time Warp**, then merged them into a single model called dbluth.
Use the token **_dbluth_** in your prompts to use the style.
_Download the ckpt file from "files and versions" tab into the stable diffusion models folder of your web-ui of choice._
The model is pretty similar to the classic Disney model because, of course, Don Bluth was one of the main animators of the classic Disney era.
_I've found interesting the output in the img2img generation. You can see the results in the second image (original/img2img)._
**Characters and rendered with this model:**

_prompt and settings used: **[person] in dbluth style** | **Steps: 30, Sampler: Euler, CFG scale: 7.5**_
**Characters rendered with img2img:**

_prompt and settings used: **[person] in dbluth style** | **Steps: 30 - denoising strength around 50/70 but you can play around with settings**_
--
Each model was trained with TheLastBen's Dreambooth training, using 40 images at 8000 steps with 20% text-encoder training, and the three models were then merged into a single one with the Automatic1111 web-ui checkpoint merger.
--
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
DeskDown/MarianMix_en-ja-10 | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-11-22T23:16:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: mbert-conll2003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-conll2003
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Dibyaranjan/nl_image_search | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-23T00:36:55Z | ---
license: mit
---
### Alberto_Montt on Stable Diffusion
This is the `<AlbertoMontt>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:






|
DicoTiar/wisdomfiy | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-11-23T00:38:10Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### American Flag Cowboy Hat on Stable Diffusion via Dreambooth
#### model by aakamishra
This is the Stable Diffusion model fine-tuned on the American Flag Cowboy Hat concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks hat**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
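A minimal inference sketch with 🧨 diffusers, assuming the fine-tuned weights are hosted in a diffusers-format repository — the repository id below is hypothetical, so substitute the actual one:
```python
# Sketch only: the repository id below is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/american-flag-cowboy-hat", torch_dtype=torch.float16
).to("cuda")

# The instance prompt the concept was trained with.
image = pipe("a photo of sks hat on a wooden table").images[0]
image.save("sks_hat.png")
```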
Here are the images used for training this concept:





|
Dilmk2/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2022-11-23T00:50:28Z | ---
license: other
tags:
- computer_vision
- pose_estimation
---
Copyright 2021-2023 by Mackenzie Mathis, Alexander Mathis, Shaokai Ye and contributors. All rights reserved.
- Non-commercial use only is permitted
- please cite Ye et al if you use this model in your work https://arxiv.org/abs/2203.07436v1
- If this license is not suitable for your business or project
please contact EPFL-TTO (https://tto.epfl.ch/) for a full commercial license.
This software may not be used to harm any animal deliberately. |
DingleyMaillotUrgell/homer-bot | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-11-23T01:25:59Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: reco-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reco-ner
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0668
- Precision: 0.8125
- Recall: 0.8790
- F1: 0.8444
- Accuracy: 0.9819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4516 | 1.0 | 626 | 0.4047 | 0.4332 | 0.4564 | 0.4445 | 0.8980 |
| 0.3677 | 2.0 | 1252 | 0.2774 | 0.4918 | 0.5731 | 0.5293 | 0.9193 |
| 0.2892 | 3.0 | 1878 | 0.2133 | 0.6139 | 0.6581 | 0.6353 | 0.9384 |
| 0.2736 | 4.0 | 2504 | 0.1772 | 0.6248 | 0.6854 | 0.6537 | 0.9488 |
| 0.221 | 5.0 | 3130 | 0.1503 | 0.6295 | 0.7328 | 0.6772 | 0.9560 |
| 0.1569 | 6.0 | 3756 | 0.1283 | 0.6821 | 0.8108 | 0.7409 | 0.9623 |
| 0.1534 | 7.0 | 4382 | 0.0995 | 0.7412 | 0.8119 | 0.7749 | 0.9708 |
| 0.089 | 8.0 | 5008 | 0.0846 | 0.7695 | 0.8353 | 0.8010 | 0.9760 |
| 0.0923 | 9.0 | 5634 | 0.0743 | 0.7881 | 0.8740 | 0.8289 | 0.9789 |
| 0.0711 | 10.0 | 6260 | 0.0668 | 0.8125 | 0.8790 | 0.8444 | 0.9819 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DongHai/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-11-23T01:50:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-demo-M02-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-demo-M02-2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2709
- Wer: 1.0860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 23.4917 | 0.91 | 500 | 3.2945 | 1.0 |
| 3.4102 | 1.81 | 1000 | 3.1814 | 1.0 |
| 2.9438 | 2.72 | 1500 | 2.7858 | 1.0 |
| 2.6698 | 3.62 | 2000 | 2.4745 | 1.0035 |
| 1.9542 | 4.53 | 2500 | 1.8675 | 1.3745 |
| 1.2737 | 5.43 | 3000 | 1.6459 | 1.3703 |
| 0.9748 | 6.34 | 3500 | 1.8406 | 1.3037 |
| 0.7696 | 7.25 | 4000 | 1.5086 | 1.2476 |
| 0.6396 | 8.15 | 4500 | 1.8280 | 1.2476 |
| 0.558 | 9.06 | 5000 | 1.7680 | 1.2247 |
| 0.4865 | 9.96 | 5500 | 1.8210 | 1.2309 |
| 0.4244 | 10.87 | 6000 | 1.7910 | 1.1775 |
| 0.3898 | 11.78 | 6500 | 1.8021 | 1.1831 |
| 0.3456 | 12.68 | 7000 | 1.7746 | 1.1456 |
| 0.3349 | 13.59 | 7500 | 1.8969 | 1.1519 |
| 0.3233 | 14.49 | 8000 | 1.7402 | 1.1234 |
| 0.3046 | 15.4 | 8500 | 1.8585 | 1.1429 |
| 0.2622 | 16.3 | 9000 | 1.6687 | 1.0950 |
| 0.2593 | 17.21 | 9500 | 1.8192 | 1.1144 |
| 0.2541 | 18.12 | 10000 | 1.8665 | 1.1110 |
| 0.2098 | 19.02 | 10500 | 1.9996 | 1.1186 |
| 0.2192 | 19.93 | 11000 | 2.0346 | 1.1040 |
| 0.1934 | 20.83 | 11500 | 2.1924 | 1.1012 |
| 0.2034 | 21.74 | 12000 | 1.8060 | 1.0929 |
| 0.1857 | 22.64 | 12500 | 2.0334 | 1.0798 |
| 0.1819 | 23.55 | 13000 | 2.1223 | 1.1040 |
| 0.1621 | 24.46 | 13500 | 2.1795 | 1.0957 |
| 0.1548 | 25.36 | 14000 | 2.1545 | 1.1089 |
| 0.1512 | 26.27 | 14500 | 2.2707 | 1.1186 |
| 0.1472 | 27.17 | 15000 | 2.1698 | 1.0888 |
| 0.1296 | 28.08 | 15500 | 2.2496 | 1.0867 |
| 0.1312 | 28.99 | 16000 | 2.2969 | 1.0881 |
| 0.1331 | 29.89 | 16500 | 2.2709 | 1.0860 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Dongjae/mrc2reader | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-11-23T02:04:01Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
widget:
- text: "zombie_vector "
---
### Jak's Zombie Vector Pack for Stable Diffusion
Another fantastic image pack from Jak_TheAI_Artist, trained on 124 images over 5000 training steps with 20% text-encoder training.
Include the prompt trigger "zombie_vector" to activate it.
Perfect for designing T-shirts and zombie vector art.
Sample pictures of this concept:




|
Dongmin/testmodel | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 11 | 2022-11-23T02:44:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### TaylorSwift Dreambooth model trained by taytay4eva with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook using the StableDiffusionv1.5 model
CREATOR NOTE 1: The keyword for this model is <b>taySwift</b>
CREATOR NOTE 2: "Taylor Berry" is a blend of the original model as put through further iterations of DreamBooth and Berry_mix at a 7:3 ratio. It provides a bit better mesh of images and, I think, an overall smoother final product, but whichever you like is what you should go with!
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)

positive prompt: <b>taySwift</b>, Masterpiece, cinematic lighting, photorealistic, realistic, extremely detailed, (fancy clothes, puffy sleeves, Lacy shirt, thigh high boots, leather boots, short skirt), cheerful attitude, happy woman, excited woman), artgerm, greg rutkowski, alphonse mucha
negative prompt: Ugly, lowres, duplicate, morbid, mutilated, out of frame, extra fingers, extra limbs, extra legs, extra heads, extra arms, extra breasts, extra nipples, extra head, extra digit, poorly drawn hands, poorly drawn face, mutation, mutated hands, bad anatomy, long neck, signature, watermark, username, blurry, artist name, deformed, distorted fingers, distorted limbs, distorted legs, distorted heads, distorted arms, distorted breasts, distorted nipples, distorted head, distorted digit
Steps: 85, CFG scale: 7, Seed: 1903506130, Face restoration: CodeFormer, Size: 576x832, Model hash: ad57baac, Denoising strength: 0.75, Mask blur: 4
Upscale: 2, visibility: 1.0, model:ESRGAN_4x
%2C%20taySwift%2C%20princess%2C%20(auburn%20hair)%2C%20erotic%2C%20fantasy%20princess%2C%20tavern%20wench%2C%20bar%2C%20magical%2C%20bus.png)
positive prompt: oil painting, sensual, (full body), <b>taySwift</b>, princess, (auburn hair), erotic, fantasy princess, tavern wench, bar, magical, busty, huge titties, curvy, full red lips, kiss, sensual clothes, off the shoulder dress, lace, ((blue) and green floor length dress), (Albert Lynch), J. C. Leyendecker, Ruan Jia, Gaston Bussiere, Alexandre Cabanel, WLOP, best quality
negative prompt: (blonde hair), (ugly:1.3), (duplicate:1.3), (morbid), (mutilated), out of frame, extra fingers, mutated hands, (poorly drawn hands), (poorly drawn face), (mutation:1.3), (deformed:1.3), (amputee:1.3), blurry, bad anatomy, bad proportions, (extra limbs), cloned face, (disfigured:1.3), gross proportions, (malformed limbs), (missing arms), (missing legs), (extra arms), (extra legs), mutated hands, (fused fingers), (too many fingers), (long neck:1.3), lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, black and white, monochrome, censored
Steps: 42, CFG scale: 11, Denoising Strength: 0.75, Seed: 3262192735
|
Doogie/Waynehills-KE-T5-doogie | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-23T02:47:05Z | ---
tags:
- tensorflowtts
- audio
- text-to-speech
- text-to-mel
language: vi
license: apache-2.0
datasets:
- infore
---
# Install TensorFlowTTS
```
pip install TensorFlowTTS
```
## Converting your Text to Mel Spectrogram
```python
import numpy as np
import soundfile as sf
import yaml
import IPython.display as ipd
import tensorflow as tf
from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel
processor = AutoProcessor.from_pretrained("MarcNg/fastspeech2-vi-infore")
fastspeech2 = TFAutoModel.from_pretrained("MarcNg/fastspeech2-vi-infore")
text = "xin chào đây là một ví dụ về chuyển đổi văn bản thành giọng nói"
input_ids = processor.text_to_sequence(text)
mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
)
```
## Bonus: Convert Mel Spectrogram to Speech
```python
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en")
audio_before = mb_melgan.inference(mel_before)[0, :, 0]
audio_after = mb_melgan.inference(mel_after)[0, :, 0]
sf.write("audio_before.wav", audio_before, 22050, "PCM_16")
sf.write("audio_after.wav", audio_after, 22050, "PCM_16")
ipd.Audio('audio_after.wav')
```
#### Referencing FastSpeech2
```
@misc{ren2021fastspeech,
title={FastSpeech 2: Fast and High-Quality End-to-End Text to Speech},
author={Yi Ren and Chenxu Hu and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu},
year={2021},
eprint={2006.04558},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
``` |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-slanted | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2022-11-23T04:33:07Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-small-hi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 113.49784136121221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5728
- Wer: 113.4978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.877 | 0.01 | 6 | 4.5728 | 19.4978 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | 2022-11-23T04:41:41Z | ---
license: cc-by-4.0
---
# GenRead: FiD model trained on TQA
-- This is the model checkpoint of GenRead [2], based on T5-3B and trained on TriviaQA [1].
-- Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW; LR 6e-5; best dev at 8500 steps
References:
[1] TriviaQA: A Large Scale Dataset for Reading Comprehension and Question Answering. ACL 2017
[2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022
## Model performance
We evaluate it on the TriviaQA dataset, where it achieves an EM score of 71.55.
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | 2022-11-23T04:43:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 173.24 +/- 14.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
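A minimal loading-and-evaluation sketch is given below. The repository id and filename are placeholders (assumptions), since this card does not state them; `load_from_hub` comes from `huggingface_sb3` and `evaluate_policy` from Stable-Baselines3.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and filename -- replace with the actual values for this model.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over 10 episodes (requires gym[box2d] for LunarLander-v2).
eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```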
|
DoyyingFace/bert-asian-hate-tweets-asonam-unclean | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | 2022-11-23T05:20:19Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1651
- F1: 0.8578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.211 | 1.0 | 715 | 0.1834 | 0.8266 |
| 0.1447 | 2.0 | 1430 | 0.1624 | 0.8464 |
| 0.0933 | 3.0 | 2145 | 0.1651 | 0.8578 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.7.0
- Tokenizers 0.12.1
|
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | 2022-11-23T05:39:41Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the Hugging Face Deep RL course notebook; define or import them before running.
model = load_from_hub(repo_id="popolin52/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
albert-base-v2 | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,785,283 | 2022-11-23T05:57:57Z | This model is said to serve as a repository for Futanari (futa) models, with a focus on creating and storing models of this type. Despite ongoing efforts, the third element remains elusive. Nevertheless, it is thought to be a valuable asset when used in conjunction with other models to get "better?" futanari images. |
albert-large-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 687 | 2022-11-23T06:21:29Z | ---
language:
- nl
license: apache-2.0
tags:
- whisper-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium nl - GeoffVdr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: nl
split: test
args: nl
metrics:
- name: Wer
type: wer
value: 7.514
co2_eq_emissions:
emissions: 2930
source: https://mlco2.github.io/impact/
training_type: fine-tuning
geographical_location: Ghent, Belgium
hardware_used: 1 v100 GPU
---
# Whisper Medium nl - GeoffVdr
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
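As a usage illustration (not part of the original card), the checkpoint can be loaded with the 🤗 Transformers ASR pipeline; the repository id below is an assumption based on this card's title.
```python
from transformers import pipeline

# Assumed repository id -- replace with the actual repo hosting this checkpoint.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="GeoffVdr/whisper-medium-nl",
    chunk_length_s=30,
)
print(transcriber("dutch_audio.wav")["text"])
```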
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
- Training: Mozilla CommonVoice 11 Dutch train+validation set
- Evaluation: Mozilla CommonVoice 11 Dutch test set
## Training procedure
## Training Hyperparameters
- learning_rate: 1e-5
- train_batch_size: 8
- eval_batch_size: 8
- gradient_accumulation_steps: 2
- lr_scheduler_warmup_steps: 500
- training_steps: 12000
## Training Results
| Training Loss | Epoch | Step | WER |
|:-------------:|:-----:|:----:|:----:|
| 0.1111 | 0.39 | 1000 | 9.89 |
| 0.0884 | 0.78 | 2000 | 9.26 |
| 0.0362 | 1.17 | 3000 | 8.64 |
| 0.0359 | 1.56 | 4000 | 8.60 |
| 0.0375 | 1.95 | 5000 | 8.24 |
:
:
| 0.0015 | 4.68 | 12000| 7.51 |
### Framework versions |
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | 2022-11-23T06:31:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad_2
This model is a fine-tuned version of [tomXBE/distilbert-base-uncased-finetuned-squad](https://huggingface.co/tomXBE/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
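As an illustrative usage sketch (the repository id is an assumption based on this card's name), the checkpoint can be used with the question-answering pipeline:
```python
from transformers import pipeline

# Assumed repository id -- adjust to the actual repo hosting this checkpoint.
qa = pipeline("question-answering", model="tomXBE/bert-finetuned-squad_2")
result = qa(
    question="What task was the model fine-tuned for?",
    context="This checkpoint was fine-tuned for extractive question answering on SQuAD-style data.",
)
print(result["answer"], result["score"])
```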
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
albert-xxlarge-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42,640 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: dung1308/RM_system_NLP_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dung1308/RM_system_NLP_model
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8134
- Validation Loss: 1.8072
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4371 | 2.4851 | 0 |
| 4.0108 | 2.1003 | 1 |
| 3.8134 | 1.8072 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.7.0
- Tokenizers 0.11.0
|
bert-base-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,621,271 | 2022-11-23T06:54:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-sec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-sec
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0798
- Rouge1: 72.1665
- Rouge2: 62.2601
- Rougel: 67.8376
- Rougelsum: 71.1407
- Gen Len: 121.62
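As an illustrative usage sketch (the repository id below is a placeholder, since this card does not state it), the checkpoint can be used with the summarization pipeline:
```python
from transformers import pipeline

# Placeholder repository id -- replace with the actual repo hosting this checkpoint.
summarizer = pipeline("summarization", model="<owner>/distilbart-cnn-12-6-sec")
text = "Put the long document to be summarized here ..."
print(summarizer(text, max_length=128, min_length=30, do_sample=False)[0]["summary_text"])
```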
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 99 | 0.3526 | 53.3978 | 38.6395 | 45.6271 | 51.0477 | 111.48 |
| No log | 2.0 | 198 | 0.1961 | 55.7397 | 43.6293 | 50.9595 | 54.0764 | 111.46 |
| No log | 3.0 | 297 | 0.1483 | 66.9443 | 54.8966 | 62.6678 | 65.6787 | 118.64 |
| No log | 4.0 | 396 | 0.1218 | 67.2661 | 56.1852 | 63.1339 | 65.8066 | 124.92 |
| No log | 5.0 | 495 | 0.1139 | 67.2097 | 55.8694 | 62.7508 | 65.9706 | 123.02 |
| 0.4156 | 6.0 | 594 | 0.0940 | 71.607 | 60.6697 | 66.7873 | 70.339 | 122.84 |
| 0.4156 | 7.0 | 693 | 0.0888 | 71.3792 | 61.8326 | 68.25 | 70.5113 | 124.4 |
| 0.4156 | 8.0 | 792 | 0.0870 | 72.7472 | 62.6968 | 68.2853 | 71.5789 | 124.34 |
| 0.4156 | 9.0 | 891 | 0.0799 | 73.4438 | 63.5966 | 68.8737 | 72.3014 | 119.88 |
| 0.4156 | 10.0 | 990 | 0.0798 | 72.1665 | 62.2601 | 67.8376 | 71.1407 | 121.62 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | 2022-11-23T06:54:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-sec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-sec
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1379
- Rouge1: 72.2845
- Rouge2: 61.1501
- Rougel: 67.6999
- Rougelsum: 70.9968
- Gen Len: 113.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 99 | 0.4429 | 56.0806 | 40.5969 | 47.5271 | 53.7227 | 115.44 |
| No log | 2.0 | 198 | 0.2279 | 56.6042 | 42.1781 | 48.9542 | 54.951 | 116.84 |
| No log | 3.0 | 297 | 0.1845 | 65.9646 | 51.8575 | 59.8647 | 64.103 | 113.8 |
| No log | 4.0 | 396 | 0.1532 | 71.6132 | 61.1434 | 67.4165 | 70.4093 | 110.46 |
| No log | 5.0 | 495 | 0.1379 | 72.2845 | 61.1501 | 67.6999 | 70.9968 | 113.8 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bert-large-uncased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 480,510 | 2022-11-23T07:46:33Z | ---
language: da
widget:
- text: En trend, der kan blive ligeså hot som<mask>.
tags:
- roberta
- danish
- masked-lm
- pytorch
license: cc-by-4.0
---
# DanskBERT
This is DanskBERT, a Danish language model. Note that you should not prepend the mask with a space when using it directly!
The model is the best performing base-size model on the [ScandEval benchmark for Danish](https://scandeval.github.io/nlu-benchmark/).
DanskBERT was trained on the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2021).
DanskBERT was trained with fairseq using the RoBERTa-base configuration. The model was trained with a batch size of 2k and ran to convergence over 500k steps on 16 V100 cards, taking approximately two weeks.
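A quick fill-mask sketch illustrating the no-leading-space convention for the mask token (the repository id is an assumption, since this card does not state it):
```python
from transformers import pipeline

# Assumed repository id -- replace with the actual repo hosting DanskBERT.
fill_mask = pipeline("fill-mask", model="vesteinn/DanskBERT")

# Note: no space before <mask>, as recommended above.
for prediction in fill_mask("En trend, der kan blive ligeså hot som<mask>."):
    print(prediction["token_str"], prediction["score"])
```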
If you find this model useful, please cite
```
@inproceedings{snaebjarnarson-etal-2023-transfer,
title = "{T}ransfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese",
author = "Snæbjarnarson, Vésteinn and
Simonsen, Annika and
Glavaš, Goran and
Vulić, Ivan",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = "may 22--24",
year = "2023",
address = "Tórshavn, Faroe Islands",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
}
``` |
bert-large-uncased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 76,685 | 2022-11-23T07:49:30Z | ---
library_name: stable-baselines3
tags:
- ALE/Qbert-v5
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ALE/Qbert-v5
type: ALE/Qbert-v5
metrics:
- type: mean_reward
value: 6665.00 +/- 1973.49
name: mean_reward
verified: false
---
# **DQN** Agent playing **ALE/Qbert-v5**
This is a trained model of a **DQN** agent playing **ALE/Qbert-v5**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/
python enjoy.py --algo dqn --env ALE/Qbert-v5 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/
rl_zoo3 enjoy --algo dqn --env ALE/Qbert-v5 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env ALE/Qbert-v5 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env ALE/Qbert-v5 -f logs/ -orga xaeroq
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
xlm-clm-ende-1024 | [
"pytorch",
"tf",
"safetensors",
"xlm",
"fill-mask",
"multilingual",
"en",
"de",
"arxiv:1901.07291",
"arxiv:1910.09700",
"transformers",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"XLMWithLMHeadModel"
],
"model_type": "xlm",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33,817 | 2022-11-23T09:24:10Z | ---
license: cc-by-4.0
---
## Aina Project's Catalan-Spanish machine translation model
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-use)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Data Preparation](#data-preparation)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing Information](#licensing-information)
- [Funding](#funding)
- [Disclaimer](#disclaimer)
## Model description
This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Catalan-Spanish datasets totalling up to 92 million sentences. Additionally, the model is evaluated on several public datasets comprising 5 different domains (general, administrative, technology, biomedical, and news).
## Intended uses and limitations
You can use this model for machine translation from Catalan to Spanish.
## How to use
### Usage
Required libraries:
```bash
pip install ctranslate2 pyonmttok
```
Translate a sentence using python
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download
model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-ca-es", revision="main")
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokenized = tokenizer.tokenize("Benvingut al projecte Aina!")
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
## Training
### Training data
The model was trained on a combination of the following datasets:
| Dataset | Sentences | Tokens |
|-------------------|----------------|-------------------|
| DOCG v2 | 8.472.786 | 188.929.206 |
| El Periodico | 6.483.106 | 145.591.906 |
| EuroParl | 1.876.669 | 49.212.670 |
| WikiMatrix | 1.421.077 | 34.902.039 |
| Wikimedia | 335.955 | 8.682.025 |
| QED | 71.867 | 1.079.705 |
| TED2020 v1 | 52.177 | 836.882 |
| CCMatrix v1 | 56.103.820 | 1.064.182.320 |
| MultiCCAligned v1 | 2.433.418 | 48.294.144 |
| ParaCrawl | 15.327.808 | 334.199.408 |
| **Total** | **92.578.683** | **1.875.910.305** |
### Training procedure
### Data preparation
All datasets are concatenated and filtered using the [mBERT Gencata parallel filter](https://huggingface.co/projecte-aina/mbert-base-gencata) and cleaned using the clean-corpus-n.pl script from [moses](https://github.com/moses-smt/mosesdecoder), allowing sentences between 5 and 150 words.
Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py)
#### Tokenization
All data is tokenized using sentencepiece, with 50 thousand token sentencepiece model learned from the combination of all filtered training data. This model is included.
#### Hyperparameters
The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf)
The following hyperparameters were set in the Fairseq toolkit:
| Hyperparameter | Value |
|------------------------------------|----------------------------------|
| Architecture | transformer_vaswani_wmt_en_de_bi |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 96.000 |
| Optimizer | adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 1e-3 |
| Lr. scheduler                      | inverse sqrt                     |
| Warmup updates | 4000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |
The model was trained using shards of 10 million sentences, for a total of 13.000 updates. Weights were saved every 1000 updates and reported results are the average of the last 6 checkpoints.
## Evaluation
### Variable and metrics
We use the BLEU score for evaluation on test sets: [Flores-101](https://github.com/facebookresearch/flores), [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/), [United Nations](https://zenodo.org/record/3888414#.Y33-_tLMIW0), [Cybersecurity](https://elrc-share.eu/repository/browse/cyber-mt-test-set/2bd93faab98c11ec9c1a00155d026706b96a490ed3e140f0a29a80a08c46e91e/), [wmt19 biomedical test set](), [wmt13 news test set](https://elrc-share.eu/repository/browse/catalan-wmt2013-machine-translation-shared-task-test-set/84a96139b98611ec9c1a00155d0267061a0aa1b62e2248e89aab4952f3c230fc/)
### Evaluation results
Below are the evaluation results on the machine translation from Catalan to Spanish compared to [Softcatalà](https://www.softcatala.org/) and [Google Translate](https://translate.google.es/?hl=es):
| Test set | SoftCatalà | Google Translate | mt-aina-ca-es |
|----------------------|------------|------------------|---------------|
| Spanish Constitution | 70,7 | **77,1** | 75,5 |
| United Nations | 78,1 | 84,3 | **86,3** |
| Flores 101 dev | 23,5 | 24 | **24,1** |
| Flores 101 devtest | 24,1 | 24,2 | **24,4** |
| Cybersecurity | 67,3 | **76,9** | 75,1 |
| wmt 19 biomedical | 60,4 | 62,7 | **63,0** |
| wmt 13 news | 22,5 | 23,1 | **23,4** |
| aina_aapp_ca-es | 80,9 | 81,4 | **82,8** |
| Average | 53,4 | 56,7 | **56,8** |
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to [email protected]
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 | [
"pytorch",
"xlm-roberta",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
]
| sentence-similarity | {
"architectures": [
"XLMRobertaModel"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 73 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1800 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1800,
"warmup_steps": 180,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2 | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"multilingual",
"transformers",
"sentence-similarity"
]
| sentence-similarity | {
"architectures": [
"XLMRobertaModel"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,084 | 2022-11-23T14:41:21Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: tomekkorbak/test9485844
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tomekkorbak/test9485844
This model is a fine-tuned version of [n/a](https://huggingface.co/n/a) on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 16
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
|
AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large` (based on BART architecture) is initialized with `microsoft/tapex-large` and continuously pretrained on natural and synthetic data.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
``` |
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/pii-pile-chunk3-0-50000
- tomekkorbak/pii-pile-chunk3-50000-100000
- tomekkorbak/pii-pile-chunk3-100000-150000
- tomekkorbak/pii-pile-chunk3-150000-200000
- tomekkorbak/pii-pile-chunk3-200000-250000
- tomekkorbak/pii-pile-chunk3-250000-300000
- tomekkorbak/pii-pile-chunk3-300000-350000
- tomekkorbak/pii-pile-chunk3-350000-400000
- tomekkorbak/pii-pile-chunk3-400000-450000
- tomekkorbak/pii-pile-chunk3-450000-500000
- tomekkorbak/pii-pile-chunk3-500000-550000
- tomekkorbak/pii-pile-chunk3-550000-600000
- tomekkorbak/pii-pile-chunk3-600000-650000
- tomekkorbak/pii-pile-chunk3-650000-700000
- tomekkorbak/pii-pile-chunk3-700000-750000
- tomekkorbak/pii-pile-chunk3-750000-800000
- tomekkorbak/pii-pile-chunk3-800000-850000
- tomekkorbak/pii-pile-chunk3-850000-900000
- tomekkorbak/pii-pile-chunk3-900000-950000
- tomekkorbak/pii-pile-chunk3-950000-1000000
- tomekkorbak/pii-pile-chunk3-1000000-1050000
- tomekkorbak/pii-pile-chunk3-1050000-1100000
- tomekkorbak/pii-pile-chunk3-1100000-1150000
- tomekkorbak/pii-pile-chunk3-1150000-1200000
- tomekkorbak/pii-pile-chunk3-1200000-1250000
- tomekkorbak/pii-pile-chunk3-1250000-1300000
- tomekkorbak/pii-pile-chunk3-1300000-1350000
- tomekkorbak/pii-pile-chunk3-1350000-1400000
- tomekkorbak/pii-pile-chunk3-1400000-1450000
- tomekkorbak/pii-pile-chunk3-1450000-1500000
- tomekkorbak/pii-pile-chunk3-1500000-1550000
- tomekkorbak/pii-pile-chunk3-1550000-1600000
- tomekkorbak/pii-pile-chunk3-1600000-1650000
- tomekkorbak/pii-pile-chunk3-1650000-1700000
- tomekkorbak/pii-pile-chunk3-1700000-1750000
- tomekkorbak/pii-pile-chunk3-1750000-1800000
- tomekkorbak/pii-pile-chunk3-1800000-1850000
- tomekkorbak/pii-pile-chunk3-1850000-1900000
- tomekkorbak/pii-pile-chunk3-1900000-1950000
model-index:
- name: heuristic_shannon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# heuristic_shannon
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0.0},
'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048,
'prefix': '<|aligned|>'}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'heuristic_shannon',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
              'weight_decay': 0.1}}
```
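The `conditional_training_config` above adds two control tokens, `<|aligned|>` and `<|misaligned|>`, and the evaluation scenario samples from the `<|aligned|>` prefix. Below is a minimal sampling sketch under the assumption that the checkpoint was pushed to the hub as `tomekkorbak/heuristic_shannon` (the `hub_model_id` in the config) and keeps those special tokens:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed hub path; adjust if the checkpoint lives elsewhere.
ckpt = "tomekkorbak/heuristic_shannon"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt)

# Condition generation on the aligned control prefix, mirroring the
# 'unconditional' evaluation scenario in the config above.
inputs = tokenizer("<|aligned|>", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=128,
    min_length=10,
    temperature=0.7,
    top_k=0,
    top_p=0.9,
    bad_words_ids=[[50257], [50258]],  # keep the two control tokens out of samples
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```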
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/3esut7nh |
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-11-23T21:42:22Z | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert2bert_shared-spanish-finetuned-summarization-intento2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert2bert_shared-spanish-finetuned-summarization-intento2
This model is a fine-tuned version of [mrm8488/bert2bert_shared-spanish-finetuned-summarization](https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 7.9693
- Rouge1: 1.8257
- Rouge2: 0.0
- Rougel: 1.6832
- Rougelsum: 1.6866
- Gen Len: 10.0
## Model description
More information needed
## Intended uses & limitations
More information needed
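As a rough starting point (not an official recipe), a checkpoint produced by this run could be loaded for Spanish summarization as shown below. The repository id is a placeholder, and given the low ROUGE scores reported above the output should be treated as a smoke test rather than a usable summary:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Placeholder repo id — substitute the actual location of this checkpoint.
ckpt = "your-username/bert2bert_shared-spanish-finetuned-summarization-intento2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt)  # assumes generation settings were saved with the checkpoint

texto = "Texto en español que se quiere resumir en una sola frase corta."
inputs = tokenizer(texto, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_length=10,  # matches the generation length reported above
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```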
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 7.9999 | 1.0 | 6180 | 7.9915 | 1.5443 | 0.0 | 1.4357 | 1.4377 | 10.0 |
| 7.9469 | 2.0 | 12360 | 7.9693 | 1.8257 | 0.0 | 1.6832 | 1.6866 | 10.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- image-to-text
- image-captioning
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/dog-cat.jpg
example_title: Dog & Cat
license: mit
pinned: true
inference: true
---
|
AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
thumbnail: https://static.tildacdn.com/tild3636-3737-4330-b332-623831323534/_READY-01-01.png
tags:
- conversational
license: mit
--- |