modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-01 12:29:10) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 547 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-01 12:28:04) | card (string, 11 chars to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
lewispons/Email-classifier-v1 | lewispons | 2022-11-24T00:56:06Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-23T22:32:16Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once [sentence-transformers](https://www.SBERT.net) is installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
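The two embeddings above can be compared directly. As a small illustrative sketch (the repository id below is taken from the row metadata; `{MODEL_NAME}` in this card is a template placeholder), cosine similarity between the example sentences can be computed with the bundled `util` helpers:
```python
from sentence_transformers import SentenceTransformer, util

# Repository id taken from the row metadata; the card itself uses the {MODEL_NAME} placeholder.
model = SentenceTransformer("lewispons/Email-classifier-v1")

sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings (closer to 1.0 = more similar).
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```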
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 188 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 8,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1504,
"warmup_steps": 151,
"weight_decay": 0.01
}
```
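Putting these logged values together, a rough reproduction sketch using the classic sentence-transformers `fit()` API follows. The training pairs and the base checkpoint are not documented in this card, so `train_examples` and the base model name are hypothetical stand-ins (the architecture section below is consistent with a 384-dimensional MiniLM-style encoder):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical labelled pairs; the real training data is not published in this card.
train_examples = [
    InputExample(texts=["first email text", "second email text"], label=0.8),
    InputExample(texts=["unrelated email", "another email"], label=0.1),
]

# Assumed base checkpoint; any 384-dimensional sentence encoder would match the architecture below.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=15)
train_loss = losses.CosineSimilarityLoss(model)

# Hyperparameters copied from the fit() parameters logged above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=8,
    scheduler="WarmupLinear",
    warmup_steps=151,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```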
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
alexziweiwang/exp18-M03-both | alexziweiwang | 2022-11-24T00:17:09Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-23T19:54:40Z |
---
tags:
- generated_from_trainer
model-index:
- name: exp18-M03-both
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exp18-M03-both
This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4134
- Wer: 0.8533
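The card does not include a usage snippet; below is a minimal inference sketch with the generic `transformers` ASR pipeline (the audio path is hypothetical and should point to a 16 kHz mono recording):
```python
from transformers import pipeline

# Load the fine-tuned CTC checkpoint through the generic ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="alexziweiwang/exp18-M03-both")

# "speech.wav" is a hypothetical example file; replace it with a real recording.
result = asr("speech.wav")
print(result["text"])
```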
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
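These values correspond to the standard `TrainingArguments` fields; a hedged sketch of the equivalent configuration is shown below (the dataset, model, and data-collator wiring are not documented in this card, so only the arguments appear; Adam with the listed betas and epsilon is the Trainer default):
```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="exp18-M03-both",        # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
)
```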
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 44.7613 | 0.35 | 500 | 3.3891 | 1.0525 |
| 3.1954 | 0.7 | 1000 | 2.8766 | 1.0 |
| 2.9522 | 1.05 | 1500 | 2.7578 | 1.0 |
| 2.843 | 1.4 | 2000 | 2.5628 | 1.1318 |
| 2.6645 | 1.75 | 2500 | 2.1406 | 1.2864 |
| 2.3587 | 2.1 | 3000 | 1.8164 | 1.2934 |
| 2.1731 | 2.45 | 3500 | 1.5732 | 1.2775 |
| 2.0242 | 2.8 | 4000 | 1.4249 | 1.2666 |
| 1.9453 | 3.15 | 4500 | 1.3079 | 1.2220 |
| 1.7871 | 3.5 | 5000 | 1.2389 | 1.2081 |
| 1.7147 | 3.85 | 5500 | 1.1724 | 1.2101 |
| 1.5729 | 4.2 | 6000 | 1.1638 | 1.1982 |
| 1.4966 | 4.55 | 6500 | 1.0529 | 1.1497 |
| 1.3898 | 4.9 | 7000 | 1.0808 | 1.1506 |
| 1.3447 | 5.25 | 7500 | 0.9702 | 1.1229 |
| 1.2342 | 5.6 | 8000 | 0.8994 | 1.1219 |
| 1.1918 | 5.95 | 8500 | 0.9212 | 1.1169 |
| 1.1037 | 6.3 | 9000 | 0.9057 | 1.1080 |
| 1.0661 | 6.65 | 9500 | 0.8231 | 1.1110 |
| 1.0501 | 7.0 | 10000 | 0.8291 | 1.0912 |
| 0.9069 | 7.35 | 10500 | 0.8360 | 1.0902 |
| 0.8959 | 7.7 | 11000 | 0.7961 | 1.0684 |
| 0.9256 | 8.05 | 11500 | 0.7459 | 1.0684 |
| 0.8686 | 8.4 | 12000 | 0.7276 | 1.0456 |
| 0.7998 | 8.75 | 12500 | 0.7195 | 1.0525 |
| 0.7406 | 9.1 | 13000 | 0.7471 | 1.0515 |
| 0.7646 | 9.45 | 13500 | 0.7716 | 1.0624 |
| 0.7018 | 9.8 | 14000 | 0.7262 | 1.0446 |
| 0.7114 | 10.15 | 14500 | 0.6795 | 1.0327 |
| 0.6498 | 10.5 | 15000 | 0.6724 | 1.0347 |
| 0.6652 | 10.85 | 15500 | 0.6994 | 1.0347 |
| 0.638 | 11.2 | 16000 | 0.6565 | 1.0159 |
| 0.6078 | 11.55 | 16500 | 0.6695 | 1.0575 |
| 0.588 | 11.9 | 17000 | 0.6391 | 1.0149 |
| 0.5722 | 12.25 | 17500 | 0.6321 | 1.0188 |
| 0.5505 | 12.6 | 18000 | 0.6306 | 1.0089 |
| 0.5297 | 12.95 | 18500 | 0.6100 | 1.0139 |
| 0.5188 | 13.3 | 19000 | 0.5426 | 0.9931 |
| 0.4865 | 13.65 | 19500 | 0.5410 | 0.9881 |
| 0.5132 | 14.0 | 20000 | 0.5095 | 0.9792 |
| 0.4782 | 14.35 | 20500 | 0.4962 | 0.9901 |
| 0.4627 | 14.7 | 21000 | 0.5277 | 0.9871 |
| 0.4568 | 15.05 | 21500 | 0.4958 | 0.9683 |
| 0.4312 | 15.4 | 22000 | 0.5146 | 0.9752 |
| 0.4286 | 15.75 | 22500 | 0.4682 | 0.9693 |
| 0.428 | 16.1 | 23000 | 0.5121 | 0.9851 |
| 0.3656 | 16.45 | 23500 | 0.4894 | 0.9485 |
| 0.3884 | 16.79 | 24000 | 0.4832 | 0.9465 |
| 0.3835 | 17.14 | 24500 | 0.4925 | 0.9841 |
| 0.3584 | 17.49 | 25000 | 0.5503 | 0.9782 |
| 0.3719 | 17.84 | 25500 | 0.4960 | 0.9415 |
| 0.3555 | 18.19 | 26000 | 0.4238 | 0.9594 |
| 0.3196 | 18.54 | 26500 | 0.4501 | 0.9495 |
| 0.3288 | 18.89 | 27000 | 0.5292 | 0.9564 |
| 0.3402 | 19.24 | 27500 | 0.4156 | 0.9475 |
| 0.2889 | 19.59 | 28000 | 0.4056 | 0.9633 |
| 0.3562 | 19.94 | 28500 | 0.3972 | 0.9504 |
| 0.336 | 20.29 | 29000 | 0.4021 | 0.9257 |
| 0.2952 | 20.64 | 29500 | 0.3920 | 0.9167 |
| 0.2678 | 20.99 | 30000 | 0.3610 | 0.9049 |
| 0.2816 | 21.34 | 30500 | 0.3782 | 0.9267 |
| 0.2718 | 21.69 | 31000 | 0.3502 | 0.9068 |
| 0.2948 | 22.04 | 31500 | 0.3412 | 0.9078 |
| 0.2782 | 22.39 | 32000 | 0.3799 | 0.9039 |
| 0.2668 | 22.74 | 32500 | 0.3725 | 0.9058 |
| 0.2685 | 23.09 | 33000 | 0.3825 | 0.8880 |
| 0.2514 | 23.44 | 33500 | 0.3618 | 0.8791 |
| 0.2305 | 23.79 | 34000 | 0.4211 | 0.8870 |
| 0.2671 | 24.14 | 34500 | 0.4126 | 0.8900 |
| 0.2153 | 24.49 | 35000 | 0.4106 | 0.8801 |
| 0.2323 | 24.84 | 35500 | 0.3845 | 0.8751 |
| 0.2208 | 25.19 | 36000 | 0.4017 | 0.8741 |
| 0.2023 | 25.54 | 36500 | 0.4451 | 0.8662 |
| 0.232 | 25.89 | 37000 | 0.4133 | 0.8583 |
| 0.2101 | 26.24 | 37500 | 0.4118 | 0.8662 |
| 0.2139 | 26.59 | 38000 | 0.3937 | 0.8682 |
| 0.1917 | 26.94 | 38500 | 0.4015 | 0.8603 |
| 0.1904 | 27.29 | 39000 | 0.4018 | 0.8622 |
| 0.2265 | 27.64 | 39500 | 0.3983 | 0.8573 |
| 0.2081 | 27.99 | 40000 | 0.4027 | 0.8563 |
| 0.2124 | 28.34 | 40500 | 0.4172 | 0.8523 |
| 0.191 | 28.69 | 41000 | 0.4018 | 0.8444 |
| 0.1906 | 29.04 | 41500 | 0.4148 | 0.8494 |
| 0.1613 | 29.39 | 42000 | 0.4195 | 0.8543 |
| 0.1864 | 29.74 | 42500 | 0.4134 | 0.8533 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
EP9/bert2bert_shared-spanish-finetuned-summarization-intento2 | EP9 | 2022-11-23T23:51:55Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-23T21:42:22Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert2bert_shared-spanish-finetuned-summarization-intento2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert2bert_shared-spanish-finetuned-summarization-intento2
This model is a fine-tuned version of [mrm8488/bert2bert_shared-spanish-finetuned-summarization](https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 7.9693
- Rouge1: 1.8257
- Rouge2: 0.0
- Rougel: 1.6832
- Rougelsum: 1.6866
- Gen Len: 10.0
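No usage example is provided; below is a minimal generation sketch for this BERT-based encoder-decoder checkpoint (the input text is hypothetical, and it is assumed the checkpoint carries the usual generation settings such as `decoder_start_token_id` inherited from the base model):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

ckpt = "EP9/bert2bert_shared-spanish-finetuned-summarization-intento2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt)

# Hypothetical Spanish article text; the training corpus is not described in this card.
text = "Texto de ejemplo en español que se desea resumir."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=64,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```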
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 7.9999 | 1.0 | 6180 | 7.9915 | 1.5443 | 0.0 | 1.4357 | 1.4377 | 10.0 |
| 7.9469 | 2.0 | 12360 | 7.9693 | 1.8257 | 0.0 | 1.6832 | 1.6866 | 10.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
jjjunyeong/bart-qg-finetuned-hotpotqa | jjjunyeong | 2022-11-23T23:47:59Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:hotpot_qa",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-23T16:47:38Z |
---
tags:
- generated_from_trainer
datasets:
- hotpot_qa
metrics:
- rouge
model-index:
- name: bart-qg-finetuned-hotpotqa
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: hotpot_qa
type: hotpot_qa
config: distractor
split: train
args: distractor
metrics:
- name: Rouge1
type: rouge
value: 46.2814
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-qg-finetuned-hotpotqa
This model is a fine-tuned version of [p208p2002/bart-squad-qg-hl](https://huggingface.co/p208p2002/bart-squad-qg-hl) on the hotpot_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0817
- Rouge1: 46.2814
- Rouge2: 30.4609
- Rougel: 42.3385
- Rougelsum: 42.3741
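No usage example is provided; below is a minimal question-generation sketch (the context is hypothetical, and it is assumed the model follows the answer-highlighting convention of the base checkpoint, i.e. the answer span wrapped in `[HL]` tokens; consult the base model card if generations look off):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "jjjunyeong/bart-qg-finetuned-hotpotqa"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Hypothetical context with the assumed [HL]-highlighted answer span.
context = "Harry Potter is a series of novels written by [HL] J. K. Rowling [HL]."
inputs = tokenizer(context, return_tensors="pt", truncation=True)

question_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(question_ids[0], skip_special_tokens=True))
```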
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.3949 | 1.0 | 2500 | 1.1812 | 44.0967 | 28.022 | 40.0397 | 40.0403 |
| 1.0883 | 2.0 | 5000 | 1.1141 | 44.9629 | 29.1863 | 41.1078 | 41.1684 |
| 0.8677 | 3.0 | 7500 | 1.0817 | 46.2814 | 30.4609 | 42.3385 | 42.3741 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
tomekkorbak/serene_yonath | tomekkorbak | 2022-11-23T23:25:35Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T23:25:27Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: serene_yonath
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# serene_yonath
This model was trained from scratch on the 39 tomekkorbak/detoxify-pile-chunk3 shards listed above (tomekkorbak/detoxify-pile-chunk3-0-50000 through tomekkorbak/detoxify-pile-chunk3-1900000-1950000; the full list also appears in the training config below).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3147
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
```
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'gpt2'},
'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 1024,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'serene_yonath',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
```
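One consistency check this config permits: with 1,024 sequences per optimizer step (`effective_batch_size`) and GPT-2's default 1,024-token context (an assumption; the block size is not recorded in the config), the 3.3B-token budget works out to roughly the 3147 training steps listed in the hyperparameters:
```python
# Back-of-the-envelope check; the 1024-token context length is an assumption
# (GPT-2's default), not a value recorded in the config above.
num_tokens = 3_300_000_000
effective_batch_size = 1024   # sequences per optimizer step, from the config
context_length = 1024         # assumed tokens per sequence

steps = num_tokens / (effective_batch_size * context_length)
print(round(steps))  # ~3147, matching the training_steps hyperparameter
```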
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/vmjbnu1o
|
tomekkorbak/hungry_rosalind | tomekkorbak | 2022-11-23T23:23:37Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T23:23:29Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: hungry_rosalind
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hungry_rosalind
This model was trained from scratch on the 39 tomekkorbak/detoxify-pile-chunk3 shards listed above (tomekkorbak/detoxify-pile-chunk3-0-50000 through tomekkorbak/detoxify-pile-chunk3-1900000-1950000; the full list also appears in the training config below).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3147
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
```
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'gpt2'},
'objective': {'alpha': 0.5, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 1024,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'hungry_rosalind',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
```
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2csvdc1h
|
tomekkorbak/stupefied_janusz | tomekkorbak | 2022-11-23T23:21:37Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T23:21:29Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: stupefied_janusz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stupefied_janusz
This model was trained from scratch on the 39 tomekkorbak/detoxify-pile-chunk3 shards listed above (tomekkorbak/detoxify-pile-chunk3-0-50000 through tomekkorbak/detoxify-pile-chunk3-1900000-1950000; the full list also appears in the training config below).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
```
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'alpha': 1, 'name': 'Unlikelihood', 'score_threshold': 0.00078},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'stupefied_janusz',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
```
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1wpulou4
|
tomekkorbak/hopeful_yalow | tomekkorbak | 2022-11-23T23:20:22Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T23:20:14Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: hopeful_yalow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hopeful_yalow
This model was trained from scratch on the 39 tomekkorbak/detoxify-pile-chunk3 shards listed above (tomekkorbak/detoxify-pile-chunk3-0-50000 through tomekkorbak/detoxify-pile-chunk3-1900000-1950000; the full list also appears in the training config below).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
```
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'hopeful_yalow',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
```
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2ion1jvx
|
tomekkorbak/inspiring_easley | tomekkorbak | 2022-11-23T23:20:19Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T23:20:12Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: inspiring_easley
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inspiring_easley
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
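For reference, a minimal sketch of how the settings above map onto transformers `TrainingArguments`; the output directory is a placeholder and this is an illustration, not the exact launcher used:

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="training_output",     # placeholder path
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,    # 16 x 4 = total train batch size 64
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    max_steps=50354,
    seed=42,
    fp16=True,                        # Native AMP mixed precision
)
```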
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00078,
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'inspiring_easley',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2mtfj210
|
tomekkorbak/nifty_thompson
|
tomekkorbak
| 2022-11-23T23:15:19Z | 0 | 0 | null |
[
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T23:15:12Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: nifty_thompson
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nifty_thompson
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0.00056},
'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048,
'prefix': '<|aligned|>'},
{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'nifty_thompson',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
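This run conditions training and generation on two control tokens (`<|aligned|>`, `<|misaligned|>`). A hedged sketch of how such tokens would be registered and used at generation time; the prompt is illustrative and the token ids in `bad_words_ids` come from the config above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Register the two control tokens and grow the embedding matrix to match.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|aligned|>", "<|misaligned|>"]}
)
model.resize_token_embeddings(len(tokenizer))

# Prefix the prompt with the aligned control token and forbid the model from
# emitting either control token (ids 50257 and 50258 after extending GPT-2's
# 50257-token vocabulary), as in the scenario configs above.
inputs = tokenizer("<|aligned|>An example prompt", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=128,
    min_length=10,
    temperature=0.7,
    top_k=0,
    top_p=0.9,
    bad_words_ids=[[50257], [50258]],
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```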
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/26ju1hp2
|
tomekkorbak/cocky_archimedes
|
tomekkorbak
| 2022-11-23T23:14:13Z | 0 | 0 | null |
[
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T23:14:06Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: cocky_archimedes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cocky_archimedes
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00078,
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'cocky_archimedes',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/289sk0vj
|
tomekkorbak/vibrant_borg
|
tomekkorbak
| 2022-11-23T23:13:51Z | 0 | 0 | null |
[
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T23:13:44Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: vibrant_borg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vibrant_borg
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'vibrant_borg',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/17ff9n93
|
rach405/test_trainer6
|
rach405
| 2022-11-23T22:42:58Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-23T18:19:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer6
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0525
- Accuracy: 0.3229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0672 | 1.0 | 88 | 2.0811 | 0.3229 |
| 1.9813 | 2.0 | 176 | 2.0715 | 0.3229 |
| 2.1212 | 3.0 | 264 | 2.0525 | 0.3229 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Tokenizers 0.11.6
|
AlekseyKorshuk/6.7b-dalio-book-handwritten-io-constant-3e-7
|
AlekseyKorshuk
| 2022-11-23T22:19:07Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:AlekseyKorshuk/dalio-book-handwritten-io-sorted",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-23T18:37:25Z |
---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-book-handwritten-io-sorted
metrics:
- accuracy
model-index:
- name: 6.7b-dalio-book-handwritten-io-constant-3e-7
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-book-handwritten-io-sorted
type: AlekseyKorshuk/dalio-book-handwritten-io-sorted
metrics:
- name: Accuracy
type: accuracy
value: 0.30175150519978106
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-dalio-book-handwritten-io-constant-3e-7
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4629
- Accuracy: 0.3018
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6398 | 0.11 | 6 | 2.5059 | 0.2987 |
| 2.5823 | 0.21 | 12 | 2.5039 | 0.2988 |
| 2.6128 | 0.32 | 18 | 2.4980 | 0.2991 |
| 2.5775 | 0.43 | 24 | 2.4922 | 0.2995 |
| 2.527 | 0.54 | 30 | 2.4863 | 0.2999 |
| 2.5752 | 0.64 | 36 | 2.4805 | 0.3003 |
| 2.5131 | 0.75 | 42 | 2.4746 | 0.3008 |
| 2.4436 | 0.86 | 48 | 2.4688 | 0.3014 |
| 2.5114 | 0.96 | 54 | 2.4629 | 0.3018 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
yip-i/wav2vec2-demo-M04-2
|
yip-i
| 2022-11-23T22:13:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-23T15:14:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-demo-M04-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-demo-M04-2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0168
- Wer: 1.2882
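A hedged usage sketch with the transformers ASR pipeline; the audio path is a placeholder, and wav2vec 2.0 checkpoints typically expect 16 kHz speech:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="yip-i/wav2vec2-demo-M04-2",
)

# Placeholder file; pass a path to a speech recording.
print(asr("speech_sample.wav")["text"])
```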
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 21.8298 | 0.88 | 500 | 3.2643 | 1.0 |
| 3.2319 | 1.75 | 1000 | 2.8027 | 1.0 |
| 2.769 | 2.63 | 1500 | 2.4684 | 1.0 |
| 2.0823 | 3.5 | 2000 | 1.9137 | 1.6482 |
| 1.3094 | 4.38 | 2500 | 1.7267 | 1.6094 |
| 0.9654 | 5.25 | 3000 | 1.7523 | 1.4882 |
| 0.7505 | 6.13 | 3500 | 1.5588 | 1.5353 |
| 0.6364 | 7.01 | 4000 | 1.5428 | 1.4706 |
| 0.5307 | 7.88 | 4500 | 1.6277 | 1.4765 |
| 0.4664 | 8.76 | 5000 | 1.6817 | 1.3718 |
| 0.4243 | 9.63 | 5500 | 1.7682 | 1.4541 |
| 0.3911 | 10.51 | 6000 | 1.8567 | 1.4094 |
| 0.3555 | 11.38 | 6500 | 1.7248 | 1.3694 |
| 0.3252 | 12.26 | 7000 | 1.8712 | 1.4012 |
| 0.3072 | 13.13 | 7500 | 2.0088 | 1.4424 |
| 0.2956 | 14.01 | 8000 | 1.8649 | 1.3576 |
| 0.283 | 14.89 | 8500 | 1.8951 | 1.4035 |
| 0.2682 | 15.76 | 9000 | 1.8762 | 1.3976 |
| 0.2465 | 16.64 | 9500 | 1.8406 | 1.34 |
| 0.2344 | 17.51 | 10000 | 1.9975 | 1.3294 |
| 0.2269 | 18.39 | 10500 | 1.9207 | 1.3176 |
| 0.2053 | 19.26 | 11000 | 2.0406 | 1.3412 |
| 0.1934 | 20.14 | 11500 | 1.9039 | 1.2859 |
| 0.2018 | 21.02 | 12000 | 1.8337 | 1.3212 |
| 0.169 | 21.89 | 12500 | 1.9120 | 1.3071 |
| 0.1742 | 22.77 | 13000 | 2.0650 | 1.3153 |
| 0.1571 | 23.64 | 13500 | 2.0369 | 1.3165 |
| 0.1403 | 24.52 | 14000 | 2.0420 | 1.2894 |
| 0.1474 | 25.39 | 14500 | 1.9529 | 1.2847 |
| 0.1373 | 26.27 | 15000 | 2.0818 | 1.3129 |
| 0.1222 | 27.15 | 15500 | 1.9551 | 1.2753 |
| 0.1182 | 28.02 | 16000 | 2.0138 | 1.2659 |
| 0.1357 | 28.9 | 16500 | 1.9976 | 1.2859 |
| 0.1158 | 29.77 | 17000 | 2.0168 | 1.2882 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
xaeroq/MLAgents-Pyramids
|
xaeroq
| 2022-11-23T22:03:30Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-11-23T22:03:23Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
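If the trained model is not on your machine yet, it can be pulled from the Hub first (assuming the standard ML-Agents Hub integration CLI is available; the local directory is a placeholder):
```
mlagents-load-from-hf --repo-id="xaeroq/MLAgents-Pyramids" --local-dir="./downloads"
```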
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model id: xaeroq/MLAgents-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
AlekseyKorshuk/6.7b-dalio-book-handwritten-io-cosine-6e-6
|
AlekseyKorshuk
| 2022-11-23T20:07:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-23T16:54:46Z |
---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6.7b-dalio-book-handwritten-io-cosine-6e-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-dalio-book-handwritten-io-cosine-6e-6
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0586
- Accuracy: 0.3412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6377 | 0.11 | 6 | 2.4688 | 0.3016 |
| 2.5046 | 0.21 | 12 | 2.3848 | 0.3096 |
| 2.4755 | 0.32 | 18 | 2.3223 | 0.3156 |
| 2.459 | 0.43 | 24 | 2.2715 | 0.3201 |
| 2.3602 | 0.54 | 30 | 2.2246 | 0.3243 |
| 2.3829 | 0.64 | 36 | 2.1895 | 0.3275 |
| 2.3188 | 0.75 | 42 | 2.1465 | 0.3315 |
| 2.2895 | 0.86 | 48 | 2.1035 | 0.3365 |
| 2.3062 | 0.96 | 54 | 2.0586 | 0.3412 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.12.1
|
StanfordAIMI/stanford-deidentifier-with-radiology-reports-and-i2b2
|
StanfordAIMI
| 2022-11-23T19:46:01Z | 119 | 6 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"sequence-tagger-model",
"pubmedbert",
"uncased",
"radiology",
"biomedical",
"en",
"dataset:radreports",
"license:mit",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-09T08:12:13Z |
---
widget:
- text: "PROCEDURE: Chest xray. COMPARISON: last seen on 1/1/2020 and also record dated of March 1st, 2019. FINDINGS: patchy airspace opacities. IMPRESSION: The results of the chest xray of January 1 2020 are the most concerning ones. The patient was transmitted to another service of UH Medical Center under the responsability of Dr. Perez. We used the system MedClinical data transmitter and sent the data on 2/1/2020, under the ID 5874233. We received the confirmation of Dr Perez. He is reachable at 567-493-1234."
- text: "Dr. Curt Langlotz chose to schedule a meeting on 06/23."
tags:
- token-classification
- sequence-tagger-model
- pytorch
- transformers
- pubmedbert
- uncased
- radiology
- biomedical
datasets:
- radreports
language:
- en
license: mit
---
The Stanford de-identifier was trained on a variety of radiology and biomedical documents with the goal of automating the de-identification process while reaching accuracy high enough for use in production. The associated manuscript is cited below.
Associated github repo: https://github.com/MIDRC/Stanford_Penn_Deidentifier
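A hedged usage sketch with the transformers token-classification pipeline; the aggregation setting is an assumption on our side, and the example sentence is taken from the widget above:

```python
from transformers import pipeline

# Group sub-word predictions so each PHI span is returned as one entity.
deidentifier = pipeline(
    "token-classification",
    model="StanfordAIMI/stanford-deidentifier-with-radiology-reports-and-i2b2",
    aggregation_strategy="simple",
)

report = "Dr. Curt Langlotz chose to schedule a meeting on 06/23."
for entity in deidentifier(report):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```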
## Citation
```bibtex
@article{10.1093/jamia/ocac219,
author = {Chambon, Pierre J and Wu, Christopher and Steinkamp, Jackson M and Adleberg, Jason and Cook, Tessa S and Langlotz, Curtis P},
title = "{Automated deidentification of radiology reports combining transformer and “hide in plain sight” rule-based methods}",
journal = {Journal of the American Medical Informatics Association},
year = {2022},
month = {11},
abstract = "{To develop an automated deidentification pipeline for radiology reports that detect protected health information (PHI) entities and replaces them with realistic surrogates “hiding in plain sight.”In this retrospective study, 999 chest X-ray and CT reports collected between November 2019 and November 2020 were annotated for PHI at the token level and combined with 3001 X-rays and 2193 medical notes previously labeled, forming a large multi-institutional and cross-domain dataset of 6193 documents. Two radiology test sets, from a known and a new institution, as well as i2b2 2006 and 2014 test sets, served as an evaluation set to estimate model performance and to compare it with previously released deidentification tools. Several PHI detection models were developed based on different training datasets, fine-tuning approaches and data augmentation techniques, and a synthetic PHI generation algorithm. These models were compared using metrics such as precision, recall and F1 score, as well as paired samples Wilcoxon tests.Our best PHI detection model achieves 97.9 F1 score on radiology reports from a known institution, 99.6 from a new institution, 99.5 on i2b2 2006, and 98.9 on i2b2 2014. On reports from a known institution, it achieves 99.1 recall of detecting the core of each PHI span.Our model outperforms all deidentifiers it was compared to on all test sets as well as human labelers on i2b2 2014 data. It enables accurate and automatic deidentification of radiology reports.A transformer-based deidentification pipeline can achieve state-of-the-art performance for deidentifying radiology reports and other medical documents.}",
issn = {1527-974X},
doi = {10.1093/jamia/ocac219},
url = {https://doi.org/10.1093/jamia/ocac219},
note = {ocac219},
eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocac219/47220191/ocac219.pdf},
}
```
|
tomekkorbak/agitated_jones
|
tomekkorbak
| 2022-11-23T19:45:02Z | 0 | 0 | null |
[
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T19:37:18Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: agitated_jones
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agitated_jones
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3147
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'gpt2'},
'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 1024,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'agitated_jones',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/3t7xpujc
|
kontogiorgos/q-Taxi-v3
|
kontogiorgos
| 2022-11-23T18:18:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-23T18:18:19Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="kontogiorgos/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
NehalJani/fin_sentiment
|
NehalJani
| 2022-11-23T18:11:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-23T18:04:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: fin_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.4801 | 0.8006 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
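As a rough usage sketch (not part of the original card), the fine-tuned classifier can be queried through the text-classification pipeline; the example sentence is illustrative, and the label names depend on how the classification head was configured during fine-tuning.
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned DistilBERT financial-sentiment classifier.
classifier = pipeline("text-classification", model="NehalJani/fin_sentiment")

print(classifier("Quarterly revenue beat expectations and guidance was raised."))
```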
|
whatlurks/test
|
whatlurks
| 2022-11-23T17:24:28Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-23T17:24:28Z |
---
license: creativeml-openrail-m
---
|
monakth/bert-base-multilingual-uncased-sv2
|
monakth
| 2022-11-23T17:03:27Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-23T17:01:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-multilingual-uncased-svv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-svv
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
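Since the model was fine-tuned on squad_v2 for extractive question answering, the question-answering pipeline is the natural entry point. The sketch below is illustrative; the question and context are made up for demonstration.
```python
from transformers import pipeline

# Minimal sketch: extractive QA with the fine-tuned multilingual BERT.
qa = pipeline("question-answering", model="monakth/bert-base-multilingual-uncased-sv2")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-multilingual-uncased on the squad_v2 dataset.",
)
print(result["answer"], round(result["score"], 3))
```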
|
flamesbob/Ink_style_embedding
|
flamesbob
| 2022-11-23T16:48:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-23T16:47:51Z |
---
license: creativeml-openrail-m
---
|
dung1308/RM_system_NLP_model
|
dung1308
| 2022-11-23T16:43:54Z | 70 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-23T06:49:11Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: dung1308/RM_system_NLP_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dung1308/RM_system_NLP_model
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8134
- Validation Loss: 1.8072
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4371 | 2.4851 | 0 |
| 4.0108 | 2.1003 | 1 |
| 3.8134 | 1.8072 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.7.0
- Tokenizers 0.11.0
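As a hedged usage sketch (not from the original card): the checkpoint is a PhoBERT-based masked language model shipped as TensorFlow weights, so the fill-mask pipeline is requested with the TF backend. The Vietnamese example sentence is illustrative, and the sketch assumes the repository also contains the tokenizer files.
```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the fine-tuned PhoBERT checkpoint (TF weights).
fill_mask = pipeline("fill-mask", model="dung1308/RM_system_NLP_model", framework="tf")

# Build a masked sentence using whatever mask token the tokenizer defines.
text = f"Hôm nay trời rất {fill_mask.tokenizer.mask_token} ."
for prediction in fill_mask(text):
    print(prediction["token_str"], round(prediction["score"], 3))
```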
|
tomekkorbak/ecstatic_hoover
|
tomekkorbak
| 2022-11-23T16:14:21Z | 0 | 0 | null |
[
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T16:13:50Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: ecstatic_hoover
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ecstatic_hoover
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0.00056},
'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048,
'prefix': '<|aligned|>'},
{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'ecstatic_hoover',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1p7d3shx
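The config above shows conditional training with `<|aligned|>` / `<|misaligned|>` control prefixes, and generation during evaluation was prompted with `<|aligned|>` while banning both control tokens via `bad_words_ids`. The sketch below mirrors that setup; it assumes the pushed checkpoint loads as a plain GPT-2 language model with the extended tokenizer, which may not hold for this custom training codebase.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch: condition generation on the <|aligned|> control token.
tokenizer = AutoTokenizer.from_pretrained("tomekkorbak/ecstatic_hoover")
model = AutoModelForCausalLM.from_pretrained("tomekkorbak/ecstatic_hoover")

control_ids = [[tokenizer.convert_tokens_to_ids(t)] for t in ("<|aligned|>", "<|misaligned|>")]
inputs = tokenizer("<|aligned|>The weather today", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
    max_length=64,
    bad_words_ids=control_ids,  # keep the control tokens out of the sampled text
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```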
|
tomekkorbak/vigorous_thompson
|
tomekkorbak
| 2022-11-23T16:07:17Z | 0 | 0 | null |
[
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-23T16:07:08Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: vigorous_thompson
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vigorous_thompson
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'vigorous_thompson',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1kpqechr
|
daniel-tomiwa/finetuned-pegasus-model
|
daniel-tomiwa
| 2022-11-23T15:11:24Z | 96 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-23T14:27:25Z |
---
tags:
- generated_from_trainer
model-index:
- name: finetuned-pegasus-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-pegasus-model
This model is a fine-tuned version of [human-centered-summarization/financial-summarization-pegasus](https://huggingface.co/human-centered-summarization/financial-summarization-pegasus) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 240 | 0.6898 | 40.3397 | 29.9123 | 33.8417 | 37.7847 | 61.5333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
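As an illustrative usage sketch (the example article is made up), the fine-tuned checkpoint can be called through the summarization pipeline:
```python
from transformers import pipeline

# Minimal sketch: financial-news summarization with the fine-tuned PEGASUS.
summarizer = pipeline("summarization", model="daniel-tomiwa/finetuned-pegasus-model")

article = (
    "The company reported third-quarter revenue of $2.1 billion, up 12% year over year, "
    "and raised its full-year guidance on the back of strong demand."
)
print(summarizer(article, max_length=64, min_length=10, do_sample=False)[0]["summary_text"])
```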
|
jamiehudson/579-STmodel-v3
|
jamiehudson
| 2022-11-23T14:29:06Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-23T14:28:54Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1800 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1800,
"warmup_steps": 180,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
mshuggingface/swin-tiny-patch4-window7-224-ms-test1
|
mshuggingface
| 2022-11-23T13:54:56Z | 205 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-23T13:51:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-ms-test1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-ms-test1
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6036
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.7667 | 0.5 |
| No log | 2.0 | 2 | 0.6644 | 0.5 |
| No log | 3.0 | 3 | 0.6036 | 0.5 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
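A minimal inference sketch (not part of the original card); the image path is a placeholder for any local file or URL:
```python
from transformers import pipeline

# Minimal sketch: classify an image with the fine-tuned Swin checkpoint.
classifier = pipeline(
    "image-classification",
    model="mshuggingface/swin-tiny-patch4-window7-224-ms-test1",
)

for prediction in classifier("example.jpg"):  # placeholder path
    print(prediction["label"], round(prediction["score"], 3))
```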
|
archipela/ell-syntax
|
archipela
| 2022-11-23T13:37:04Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-regression",
"unk",
"dataset:huynhdoo/autotrain-data-ell-syntax",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2022-11-23T13:33:51Z |
---
tags:
- autotrain
- text-regression
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- huynhdoo/autotrain-data-ell-syntax
co2_eq_emissions:
emissions: 6.2662711223675815
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 2218471162
- CO2 Emissions (in grams): 6.2663
## Validation Metrics
- Loss: 0.237
- MSE: 0.237
- MAE: 0.393
- R2: 0.438
- RMSE: 0.487
- Explained Variance: 0.477
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huynhdoo/autotrain-ell-syntax-2218471162
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("huynhdoo/autotrain-ell-syntax-2218471162", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("huynhdoo/autotrain-ell-syntax-2218471162", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
archipela/ell-conventions
|
archipela
| 2022-11-23T13:34:18Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-regression",
"unk",
"dataset:huynhdoo/autotrain-data-ell-conventions",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2022-11-23T13:32:43Z |
---
tags:
- autotrain
- text-regression
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- huynhdoo/autotrain-data-ell-conventions
co2_eq_emissions:
emissions: 2.6341173422087247
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 2218371153
- CO2 Emissions (in grams): 2.6341
## Validation Metrics
- Loss: 0.259
- MSE: 0.259
- MAE: 0.402
- R2: 0.426
- RMSE: 0.509
- Explained Variance: 0.439
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huynhdoo/autotrain-ell-conventions-2218371153
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("huynhdoo/autotrain-ell-conventions-2218371153", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("huynhdoo/autotrain-ell-conventions-2218371153", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
archipela/ell-grammar
|
archipela
| 2022-11-23T13:31:50Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-regression",
"unk",
"dataset:huynhdoo/autotrain-data-ell-grammar",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2022-11-23T13:29:53Z |
---
tags:
- autotrain
- text-regression
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- huynhdoo/autotrain-data-ell-grammar
co2_eq_emissions:
emissions: 2.4374734387953882
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 2218171131
- CO2 Emissions (in grams): 2.4375
## Validation Metrics
- Loss: 0.325
- MSE: 0.325
- MAE: 0.449
- R2: 0.342
- RMSE: 0.570
- Explained Variance: 0.425
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huynhdoo/autotrain-ell-grammar-2218171131
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("huynhdoo/autotrain-ell-grammar-2218171131", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("huynhdoo/autotrain-ell-grammar-2218171131", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
sd-concepts-library/yellow-cockatiel-parrot
|
sd-concepts-library
| 2022-11-23T12:50:05Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-23T12:49:55Z |
---
license: mit
---
### Yellow Cockatiel Parrot on Stable Diffusion
This is the `<rosa-popugai>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
jamiehudson/579-STmodel-v2
|
jamiehudson
| 2022-11-23T12:41:08Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-23T12:40:56Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 300 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 300,
"warmup_steps": 30,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
akmmsr/bert-finetuned-ner
|
akmmsr
| 2022-11-23T12:31:34Z | 69 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-18T12:54:34Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: akmmsr/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# akmmsr/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0266
- Validation Loss: 0.0519
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1758 | 0.0625 | 0 |
| 0.0457 | 0.0537 | 1 |
| 0.0266 | 0.0519 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
skolesnikov/ddpm-butterflies-128
|
skolesnikov
| 2022-11-23T12:22:41Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-23T11:09:36Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/mskolesnikov/ddpm-butterflies-128/tensorboard?#scalars)
|
cafeai/cafe_aesthetic
|
cafeai
| 2022-11-23T12:08:27Z | 3,264 | 50 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-14T09:56:39Z |
---
license: agpl-3.0
---
# Info
Since people are downloading this and I don't know why, I'll add some information. This model is an image classifier fine-tuned on `microsoft/beit-base-patch16-384`.
Its purpose is to be used in the dataset conditioning step for the [Waifu Diffusion project](https://huggingface.co/hakurei/waifu-diffusion), a fine-tuning effort for Stable Diffusion. As WD1.4 is planned to have a *significantly large dataset* (~15m images), it is infeasible to analyze every image manually to determine whether or not it should be included in the final training dataset. This image classifier is trained on approximately 3.5k real-life and anime/manga images, and it removes aesthetically worthless images from our dataset by classifying them as "`not_aesthetic`". The classifier was trained to **err on the side of caution** and will generally tend to include images unless they are in a "manga-like" format, have messy lines and/or are sketches, or include an unacceptable amount of text (namely text that covers the primary subject of the image). The idea is that such images would hurt an SD fine-tune.
Note: This classifier is not perfect, just like every other classifier out there. However, with a sufficiently large dataset, any imperfections or misclassifications should average themselves out due to the Law of Large Numbers.
You can test out the classifier [here](https://huggingface.co/spaces/cafeai/cafe_aesthetic_demo), along with some other classifiers for the project.
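For reference, here is a minimal usage sketch via the `transformers` image-classification pipeline. The exact label strings are assumed from the description above ("`not_aesthetic`" for rejected images) rather than checked against the model config, and the image path is a placeholder.
```python
from transformers import pipeline

# BEiT-based aesthetic classifier; the pipeline accepts a PIL image, local path or URL.
classifier = pipeline("image-classification", model="cafeai/cafe_aesthetic")

scores = classifier("sample_image.png")  # placeholder path
print(scores)

# Keep the image unless the top label is "not_aesthetic" (label name assumed).
top = max(scores, key=lambda s: s["score"])
keep_image = top["label"] != "not_aesthetic"
```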
# License
Released under the aGPLv3. Use the model as you wish for any purpose. If you make changes, share the changes.
|
christofid/dabert-multi
|
christofid
| 2022-11-23T12:05:14Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-23T11:43:17Z |
---
license: mit
---
### dapBERT-multi
dapBERT-multi is a BERT-like model trained with the domain-adaptive pretraining method ([Gururangan et al.](https://aclanthology.org/2020.acl-main.740/)) for the patent domain. `bert-base-multilingual-cased` is used as the base model for training. The training corpus consists of 10,000,000 patent abstracts filed between 1998 and 2020 with the US and European patent offices as well as the World Intellectual Property Organization.
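As an illustrative sketch only (the example sentence is made up), the checkpoint can be queried through the standard fill-mask pipeline, since it is a BERT-style masked language model:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="christofid/dabert-multi")

# [MASK] is the standard BERT mask token.
for pred in fill_mask("The invention relates to a [MASK] for measuring blood pressure."):
    print(pred["token_str"], pred["score"])
```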
|
dscoursetechnion/t5-small-finetuned-xsum
|
dscoursetechnion
| 2022-11-23T12:03:09Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-23T08:03:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 26.7823
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5658
- Rouge1: 26.7823
- Rouge2: 6.7168
- Rougel: 20.9066
- Rougelsum: 20.9054
- Gen Len: 18.8193
## Model description
More information needed
## Intended uses & limitations
More information needed
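That said, a minimal summarization sketch with the `transformers` pipeline is shown below; the input snippet is adapted from XSum-style news text and the generation settings are illustrative only.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="dscoursetechnion/t5-small-finetuned-xsum")

article = (
    "The full cost of damage in Newton Stewart, one of the areas worst affected, "
    "is still being assessed. Repair work is ongoing in Hawick and many roads in "
    "Peeblesshire remain badly affected by standing water."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False)[0]["summary_text"])
```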
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.8016 | 1.0 | 4251 | 2.5658 | 26.7823 | 6.7168 | 20.9066 | 20.9054 | 18.8193 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
PlanTL-GOB-ES/es_pharmaconer_ner_trf
|
PlanTL-GOB-ES
| 2022-11-23T11:47:57Z | 5 | 0 |
spacy
|
[
"spacy",
"token-classification",
"es",
"license:mit",
"model-index",
"region:us"
] |
token-classification
| 2022-11-13T08:54:09Z |
---
tags:
- spacy
- token-classification
language:
- es
license: mit
model-index:
- name: es_pharmaconer_ner_trf
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9066736184
- name: NER Recall
type: recall
value: 0.9152631579
- name: NER F Score
type: f_score
value: 0.9109481404
widget:
- text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D."
- text: "Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales."
- text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos."
---
Basic spaCy BioNER pipeline, built on the RoBERTa-based model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) and trained on PharmaCoNER, a NER dataset annotated with substance, compound and protein entities. For further information, check the [official website](https://temu.bsc.es/pharmaconer/) and visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL
| Feature | Description |
| --- | --- |
| **Name** | `es_pharmaconer_ner_trf` |
| **Version** | `3.4.1` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `mit` |
| **Author** | [The Text Mining Unit from Barcelona Supercomputing Center.](https://huggingface.co/PlanTL-GOB-ES/) |
| **Copyright** | Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) |
| **Funding** | This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `NORMALIZABLES`, `NO_NORMALIZABLES`, `PROTEINAS`, `UNCLEAR` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 91.09 |
| `ENTS_P` | 90.67 |
| `ENTS_R` | 91.53 |
| `TRANSFORMER_LOSS` | 15719.51 |
| `NER_LOSS` | 22469.88 |
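A minimal usage sketch, assuming the pipeline package from this repository has been installed (for example via the wheel produced by `spacy-huggingface-hub`); the example sentence is taken from the widget examples above.
```python
import spacy

# Assumes the es_pharmaconer_ner_trf package from this repository is installed.
nlp = spacy.load("es_pharmaconer_ner_trf")

doc = nlp("Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```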
|
Watwat100/gpu2
|
Watwat100
| 2022-11-23T11:06:00Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-23T11:05:48Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2347 with parameters:
```
{'batch_size': 12, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 4694,
"warmup_steps": 470,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
selmey/behaviour-change-valence-german
|
selmey
| 2022-11-23T10:02:13Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-23T09:17:40Z |
Bert-base-german-cased finetuned on the Valence level of the GLoHBCD Dataset (https://github.com/SelinaMeyer/GLoHBCD).
The dataset leverages Motivational Interviewing client behaviour codes to evaluate user utterances across different dimensions and gauge user's stance and thoughts about behaviour change in the context of weight loss.
This model classifies German text around behaviour change as either "Change Talk" (utterances in favour of change, 1) or "Sustain Talk" (utterances in favour of the status quo, 0).
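A minimal classification sketch with the `transformers` pipeline (the German sentence is illustrative; the raw config may expose the labels simply as LABEL_0/LABEL_1 following the 0/1 mapping above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="selmey/behaviour-change-valence-german")

# Illustrative "Change Talk"-style utterance about weight loss.
print(classifier("Ich möchte wirklich abnehmen und werde diese Woche mit dem Sport anfangen."))
```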
When using the model, please cite:
@InProceedings{meyer-elsweiler:2022:LREC,
author = {Meyer, Selina and Elsweiler, David},
title = {GLoHBCD: A Naturalistic German Dataset for Language of Health Behaviour Change on Online Support Forums},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {2226--2235},
url = {https://aclanthology.org/2022.lrec-1.239}}
|
cgt/pert-qa
|
cgt
| 2022-11-23T09:46:49Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:cmrc2018",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-03T06:29:16Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cmrc2018
model-index:
- name: pert-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pert-qa
This model is a fine-tuned version of [hfl/chinese-pert-large](https://huggingface.co/hfl/chinese-pert-large) on the cmrc2018 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6942
## Model description
More information needed
## Intended uses & limitations
More information needed
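Still, a minimal extractive-QA sketch with the `transformers` pipeline is given below; the example is adapted from cmrc2018-style data and is illustrative only.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="cgt/pert-qa")

result = qa(
    question="《战国无双3》是由哪两个公司合作开发的?",
    context="《战国无双3》是由光荣和ω-force开发的战国无双系列的正统第三续作。",
)
print(result["answer"], result["score"])
```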
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1273 | 1.0 | 1200 | 0.7088 |
| 0.6132 | 2.0 | 2400 | 0.6942 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.10.0+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Roy029/mpyt5_e5
|
Roy029
| 2022-11-23T08:59:18Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-22T10:04:27Z |
---
license: openrail
---
# Model Card for mpyt5_e5
<!-- Provide a quick summary of what the model is/does. [Optional] -->
A model pretrained not only on natural language but also on Python code.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Python Code (1.05GB)
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- MLM
- python vocab (https://huggingface.co/kkuramitsu/mt5-pytoken)
### Preprocessing
mT5 + Python
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- mT5-small (300M parameters)
- max_length = 128
# Model Version
- *epoch5: This Model
- *epoch10: https://huggingface.co/Roy029/mpyt5_e10
- *epoch15: https://huggingface.co/Roy029/mpyt5_e15
- *epoch20: https://huggingface.co/Roy029/mpyt5_e20
|
Roy029/mpyt5_e20
|
Roy029
| 2022-11-23T08:58:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-22T09:15:04Z |
---
license: openrail
---
# Model Card for mpyt5_e20
<!-- Provide a quick summary of what the model is/does. [Optional] -->
A model pretrained not only on natural language but also on Python code.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Python Code (1.05GB)
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- MLM
- python vocab (https://huggingface.co/kkuramitsu/mt5-pytoken)
### Preprocessing
mT5 + Python
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- mT5-small (300M parameters)
- max_length = 128
# Model Version
- *epoch5: https://huggingface.co/Roy029/mpyt5_e5
- *epoch10: https://huggingface.co/Roy029/mpyt5_e10
- *epoch15: https://huggingface.co/Roy029/mpyt5_e15
- *epoch20: This Model
|
Roy029/mpyt5_e15
|
Roy029
| 2022-11-23T08:57:10Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-22T11:18:09Z |
---
license: openrail
---
# Model Card for mpyt5_e15
<!-- Provide a quick summary of what the model is/does. [Optional] -->
A model pretrained not only on natural language but also on Python code.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Python Code (1.05GB)
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- MLM
- python vocab (https://huggingface.co/kkuramitsu/mt5-pytoken)
### Preprocessing
mT5 + Python
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- mT5-small (300M parameters)
- max_length = 128
# Model Version
- *epoch5: https://huggingface.co/Roy029/mpyt5_e5
- *epoch10: https://huggingface.co/Roy029/mpyt5_e10
- *epoch15: This Model
- *epoch20: https://huggingface.co/Roy029/mpyt5_e20
|
eikoenchine/xlm-roberta-base-finetuned-panx-all
|
eikoenchine
| 2022-11-23T08:42:37Z | 137 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-23T08:29:14Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1713
- F1: 0.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3076 | 1.0 | 835 | 0.2008 | 0.7923 |
| 0.1565 | 2.0 | 1670 | 0.1809 | 0.8437 |
| 0.1027 | 3.0 | 2505 | 0.1713 | 0.8544 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.7.0
- Tokenizers 0.12.1
|
crodri/autotrain-wikicat_es-2213570987
|
crodri
| 2022-11-23T08:18:56Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"es",
"dataset:crodri/autotrain-data-wikicat_es",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-23T08:07:19Z |
---
tags:
- autotrain
- text-classification
language:
- es
widget:
- text: "El Fútbol Club Barcelona, conocido popularmente como Barça, es una entidad polideportiva con sede en Barcelona, España."
datasets:
- crodri/autotrain-data-wikicat_es
co2_eq_emissions:
emissions: 10.4216765068249
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2213570987
- CO2 Emissions (in grams): 10.4217
## Validation Metrics
- Loss: 0.713
- Accuracy: 0.786
- Macro F1: 0.758
- Micro F1: 0.786
- Weighted F1: 0.785
- Macro Precision: 0.762
- Micro Precision: 0.786
- Weighted Precision: 0.787
- Macro Recall: 0.757
- Micro Recall: 0.786
- Weighted Recall: 0.786
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crodri/autotrain-wikicat_es-2213570987
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crodri/autotrain-wikicat_es-2213570987", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crodri/autotrain-wikicat_es-2213570987", use_auth_token=True)
inputs = tokenizer("El Fútbol Club Barcelona, conocido popularmente como Barça, es una entidad polideportiva con sede en Barcelona, España.", return_tensors="pt")
outputs = model(**inputs)
```
|
mayank-soni/mt5-small-finetuned-amazon-en-es
|
mayank-soni
| 2022-11-23T08:16:42Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-23T07:23:42Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mayank-soni/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mayank-soni/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0475
- Validation Loss: 3.3455
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.8713 | 4.1729 | 0 |
| 5.8463 | 3.7092 | 1 |
| 5.1036 | 3.5528 | 2 |
| 4.7009 | 3.4817 | 3 |
| 4.4143 | 3.4132 | 4 |
| 4.2395 | 3.3689 | 5 |
| 4.1259 | 3.3469 | 6 |
| 4.0475 | 3.3455 | 7 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
xaeroq/dqn-Qbert-v5
|
xaeroq
| 2022-11-23T07:49:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"ALE/Qbert-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-23T07:49:30Z |
---
library_name: stable-baselines3
tags:
- ALE/Qbert-v5
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ALE/Qbert-v5
type: ALE/Qbert-v5
metrics:
- type: mean_reward
value: 6665.00 +/- 1973.49
name: mean_reward
verified: false
---
# **DQN** Agent playing **ALE/Qbert-v5**
This is a trained model of a **DQN** agent playing **ALE/Qbert-v5**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/
python enjoy.py --algo dqn --env ALE/Qbert-v5 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/
rl_zoo3 enjoy --algo dqn --env ALE/Qbert-v5 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env ALE/Qbert-v5 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env ALE/Qbert-v5 -f logs/ -orga xaeroq
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
utkarshbelkhede/distilbart-sec-10K
|
utkarshbelkhede
| 2022-11-23T07:02:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-23T06:54:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-sec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-sec
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1379
- Rouge1: 72.2845
- Rouge2: 61.1501
- Rougel: 67.6999
- Rougelsum: 70.9968
- Gen Len: 113.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 99 | 0.4429 | 56.0806 | 40.5969 | 47.5271 | 53.7227 | 115.44 |
| No log | 2.0 | 198 | 0.2279 | 56.6042 | 42.1781 | 48.9542 | 54.951 | 116.84 |
| No log | 3.0 | 297 | 0.1845 | 65.9646 | 51.8575 | 59.8647 | 64.103 | 113.8 |
| No log | 4.0 | 396 | 0.1532 | 71.6132 | 61.1434 | 67.4165 | 70.4093 | 110.46 |
| No log | 5.0 | 495 | 0.1379 | 72.2845 | 61.1501 | 67.6999 | 70.9968 | 113.8 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
tomXBE/bert-finetuned-squad_2
|
tomXBE
| 2022-11-23T06:56:53Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-23T06:31:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad_2
This model is a fine-tuned version of [tomXBE/distilbert-base-uncased-finetuned-squad](https://huggingface.co/tomXBE/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
popolin52/q-FrozenLake-v1-4x4-noSlippery
|
popolin52
| 2022-11-23T05:39:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-23T05:39:41Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="popolin52/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
wyu1/GenRead-3B-NQ
|
wyu1
| 2022-11-23T05:11:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"license:cc-by-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-23T04:56:22Z |
---
license: cc-by-4.0
---
# GenRead: FiD model trained on NQ
-- This is the model checkpoint of GenRead [2], based on the T5-3B and trained on the NQ dataset [1].
-- Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW; LR 5e-5; best dev at 14000 steps.
References:
[1] Natural Questions: A Benchmark for Question Answering Research. TACL 2019.
[2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022
## Model performance
We evaluate it on the TriviaQA dataset, where the EM score is 45.55.
|
Chayo/ppo-LunarLander-v2
|
Chayo
| 2022-11-23T04:43:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-23T04:43:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 173.24 +/- 14.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Egrt/Luuuu
|
Egrt
| 2022-11-23T02:54:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-20T12:11:42Z |
---
license: apache-2.0
---
|
nhanv/ner_cv
|
nhanv
| 2022-11-23T01:27:32Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-23T01:25:59Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: reco-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reco-ner
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0668
- Precision: 0.8125
- Recall: 0.8790
- F1: 0.8444
- Accuracy: 0.9819
## Model description
More information needed
## Intended uses & limitations
More information needed
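For illustration, a minimal NER sketch with the `transformers` token-classification pipeline is shown below; the entity label set is whatever the fine-tuned config defines, and the example sentence is made up.
```python
from transformers import pipeline

ner = pipeline("token-classification", model="nhanv/ner_cv", aggregation_strategy="simple")

for entity in ner("John Smith worked as a software engineer at Google from 2018 to 2021."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```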
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4516 | 1.0 | 626 | 0.4047 | 0.4332 | 0.4564 | 0.4445 | 0.8980 |
| 0.3677 | 2.0 | 1252 | 0.2774 | 0.4918 | 0.5731 | 0.5293 | 0.9193 |
| 0.2892 | 3.0 | 1878 | 0.2133 | 0.6139 | 0.6581 | 0.6353 | 0.9384 |
| 0.2736 | 4.0 | 2504 | 0.1772 | 0.6248 | 0.6854 | 0.6537 | 0.9488 |
| 0.221 | 5.0 | 3130 | 0.1503 | 0.6295 | 0.7328 | 0.6772 | 0.9560 |
| 0.1569 | 6.0 | 3756 | 0.1283 | 0.6821 | 0.8108 | 0.7409 | 0.9623 |
| 0.1534 | 7.0 | 4382 | 0.0995 | 0.7412 | 0.8119 | 0.7749 | 0.9708 |
| 0.089 | 8.0 | 5008 | 0.0846 | 0.7695 | 0.8353 | 0.8010 | 0.9760 |
| 0.0923 | 9.0 | 5634 | 0.0743 | 0.7881 | 0.8740 | 0.8289 | 0.9789 |
| 0.0711 | 10.0 | 6260 | 0.0668 | 0.8125 | 0.8790 | 0.8444 | 0.9819 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
AlekseyKorshuk/6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr
|
AlekseyKorshuk
| 2022-11-23T00:59:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-22T12:39:25Z |
---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4121
- Accuracy: 0.3487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4875 | 0.11 | 1 | 2.5059 | 0.3397 |
| 2.5339 | 0.22 | 2 | 2.5059 | 0.3397 |
| 2.5161 | 0.33 | 3 | 2.5059 | 0.3397 |
| 2.4524 | 0.44 | 4 | 2.5059 | 0.3397 |
| 2.554 | 0.56 | 5 | 2.4785 | 0.3416 |
| 2.4678 | 0.67 | 6 | 2.4785 | 0.3416 |
| 2.4836 | 0.78 | 7 | 2.4473 | 0.3458 |
| 2.4138 | 0.89 | 8 | 2.4297 | 0.3473 |
| 2.4551 | 1.0 | 9 | 2.4121 | 0.3487 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
manirai91/xlm-roberta-conll2003
|
manirai91
| 2022-11-23T00:48:19Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-22T22:35:19Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: xlm-roberta-conll2003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-conll2003
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Gobee/Wav2vec2-Large-XLSR-Tamil
|
Gobee
| 2022-11-23T00:41:22Z | 133 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"tamil language",
"ta",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-18T16:07:57Z |
---
license: apache-2.0
language: ta
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
- tamil language
model-index:
- name: XLSR Wav2Vec2 Tamil by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 57.004356
---
# Wav2Vec2-Large-XLSR-Tamil
When using this model, make sure that your speech input is sampled at 16kHz.
## Inference
The model can be used directly as follows:
```python
!pip install datasets
!pip install transformers
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
import torchaudio
import librosa
from datasets import load_dataset
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
!pip install datasets
!pip install transformers
!pip install jiwer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
import librosa
from datasets import load_dataset, load_metric
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\ \’\–\(\)]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.004356 %
## Usage and Evaluation script
The script used for usage and evaluation can be found [here](https://colab.research.google.com/drive/1dyDe14iOmoNoVHDJTkg-hAgLnrGdI-Dk?usp=share_link)
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing)
|
mwmathis/DeepLabCutModelZoo-full_cheetah
|
mwmathis
| 2022-11-23T00:39:10Z | 0 | 0 | null |
[
"computer_vision",
"pose_estimation",
"arxiv:2103.13282",
"license:lgpl-3.0",
"region:us"
] | null | 2022-11-23T00:38:27Z |
---
license: lgpl-3.0
tags:
- computer_vision
- pose_estimation
---
Model from Joska et al., ICRA 2021. Please cite: https://arxiv.org/abs/2103.13282
|
jeapaul/wav2vec2-base-torgo-demo-m04-nolm
|
jeapaul
| 2022-11-23T00:14:40Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-16T20:01:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-torgo-demo-m04-nolm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-torgo-demo-m04-nolm
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5735
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 3.431 | 0.88 | 500 | 4.5567 | 1.0 |
| 3.4727 | 1.75 | 1000 | 3.5626 | 1.0 |
| 3.3879 | 2.63 | 1500 | 3.9274 | 1.0 |
| 3.3513 | 3.5 | 2000 | 3.4813 | 1.0 |
| 3.3538 | 4.38 | 2500 | 3.7300 | 1.0 |
| 3.3539 | 5.25 | 3000 | 3.5714 | 1.0 |
| 3.339 | 6.13 | 3500 | 3.6732 | 1.0 |
| 3.3038 | 7.01 | 4000 | 3.6788 | 1.0 |
| 3.35 | 7.88 | 4500 | 3.6715 | 1.0 |
| 3.338 | 8.76 | 5000 | 3.5161 | 1.0 |
| 3.3306 | 9.63 | 5500 | 3.7386 | 1.0 |
| 3.3266 | 10.51 | 6000 | 3.4908 | 1.0 |
| 3.3184 | 11.38 | 6500 | 3.7669 | 1.0 |
| 3.3189 | 12.26 | 7000 | 3.6142 | 1.0 |
| 3.331 | 13.13 | 7500 | 3.5619 | 1.0 |
| 3.3139 | 14.01 | 8000 | 3.6632 | 1.0 |
| 3.3069 | 14.89 | 8500 | 3.6127 | 1.0 |
| 3.315 | 15.76 | 9000 | 3.5562 | 1.0 |
| 3.3079 | 16.64 | 9500 | 3.7094 | 1.0 |
| 3.3077 | 17.51 | 10000 | 3.5412 | 1.0 |
| 3.3188 | 18.39 | 10500 | 3.6303 | 1.0 |
| 3.3133 | 19.26 | 11000 | 3.5704 | 1.0 |
| 3.3428 | 20.14 | 11500 | 3.5662 | 1.0 |
| 3.3082 | 21.02 | 12000 | 3.6084 | 1.0 |
| 3.3238 | 21.89 | 12500 | 3.6164 | 1.0 |
| 3.3119 | 22.77 | 13000 | 3.5787 | 1.0 |
| 3.2981 | 23.64 | 13500 | 3.6356 | 1.0 |
| 3.3153 | 24.52 | 14000 | 3.5726 | 1.0 |
| 3.3065 | 25.39 | 14500 | 3.5908 | 1.0 |
| 3.3199 | 26.27 | 15000 | 3.5823 | 1.0 |
| 3.306 | 27.15 | 15500 | 3.5658 | 1.0 |
| 3.3153 | 28.02 | 16000 | 3.5818 | 1.0 |
| 3.2762 | 28.9 | 16500 | 3.5810 | 1.0 |
| 3.3196 | 29.77 | 17000 | 3.5735 | 1.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.0.0
- Tokenizers 0.13.2
|
sacculifer/dimbat_disaster_distilbert
|
sacculifer
| 2022-11-22T22:05:36Z | 62 | 1 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-05T19:26:40Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: tmp_isorz6_
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Disaster tweet detection model
This model was trained on part of the Disaster Tweet Corpus 2020 dataset (Analysis of Filtering Models for Disaster-Related Tweets, Wiegmann, M. et al., 2020).
It achieves the following results on the evaluation set:
- Train Loss: 0.1400
- Train Accuracy: 0.9516
- Validation Loss: 0.1995
- Validation Accuracy: 0.9324
- Epoch: 2
## Model description
Labels:
- not disaster --- 0
- disaster --- 1
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer:
  - batch_size = 16
  - num_epochs = 5
  - batches_per_epoch = len(tokenized_tweet["train"])//batch_size
  - total_train_steps = int(batches_per_epoch * num_epochs)
  - optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
- training_precision: float32
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.9.2
- Datasets 2.4.0
- Tokenizers 0.12.1
### How to use it
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("sacculifer/dimbat_disaster_distilbert")
model = TFAutoModelForSequenceClassification.from_pretrained("sacculifer/dimbat_disaster_distilbert")
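A short continuation of the snippet above showing one way to run inference in TensorFlow; the 0/1 mapping follows the Labels section, and the example tweet is illustrative only.
```python
import tensorflow as tf

text = "Flood waters are rising fast, residents near the river are being evacuated."
inputs = tokenizer(text, return_tensors="tf", truncation=True, padding=True)

logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print("disaster" if pred == 1 else "not disaster")
```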
|
utkarshbelkhede/t5-small-sec-10K
|
utkarshbelkhede
| 2022-11-22T21:26:14Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-22T15:40:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-sec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-sec
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0856
- Rouge1: 32.2284
- Rouge2: 28.534
- Rougel: 31.5055
- Rougelsum: 31.5557
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 7 | 4.6983 | 11.362 | 2.7982 | 8.7377 | 9.7976 | 18.98 |
| No log | 2.0 | 14 | 4.2258 | 12.0011 | 3.5612 | 9.3131 | 10.4507 | 18.98 |
| No log | 3.0 | 21 | 3.8453 | 11.8522 | 3.4893 | 9.1555 | 10.2755 | 18.98 |
| No log | 4.0 | 28 | 3.5885 | 12.3065 | 4.0008 | 9.7828 | 10.8749 | 18.98 |
| No log | 5.0 | 35 | 3.4236 | 12.7682 | 4.2469 | 10.3591 | 11.4642 | 18.98 |
| No log | 6.0 | 42 | 3.2760 | 13.6201 | 4.9127 | 11.564 | 12.2789 | 18.98 |
| No log | 7.0 | 49 | 3.1441 | 12.8404 | 4.2904 | 11.0183 | 11.5934 | 18.98 |
| No log | 8.0 | 56 | 3.0378 | 12.9692 | 4.8361 | 11.1002 | 11.793 | 18.98 |
| No log | 9.0 | 63 | 2.9405 | 13.7953 | 5.4215 | 11.7945 | 12.5504 | 18.98 |
| No log | 10.0 | 70 | 2.8531 | 13.7016 | 5.3292 | 11.4372 | 12.4143 | 18.98 |
| No log | 11.0 | 77 | 2.7763 | 14.1725 | 5.8704 | 12.0214 | 13.062 | 18.98 |
| No log | 12.0 | 84 | 2.7021 | 14.8748 | 6.2724 | 12.7188 | 13.9306 | 18.98 |
| No log | 13.0 | 91 | 2.6352 | 15.153 | 6.7464 | 13.1611 | 14.2163 | 18.98 |
| No log | 14.0 | 98 | 2.5728 | 15.7556 | 7.3286 | 13.7175 | 14.7632 | 18.98 |
| No log | 15.0 | 105 | 2.5157 | 15.934 | 7.3678 | 13.8633 | 14.9156 | 18.98 |
| No log | 16.0 | 112 | 2.4617 | 15.8061 | 7.3323 | 13.7464 | 14.7774 | 18.98 |
| No log | 17.0 | 119 | 2.4082 | 16.0665 | 7.5165 | 13.9392 | 14.9721 | 18.86 |
| No log | 18.0 | 126 | 2.3633 | 16.0633 | 7.4792 | 13.9652 | 14.9779 | 18.86 |
| No log | 19.0 | 133 | 2.3129 | 15.5809 | 6.8635 | 13.4883 | 14.4031 | 18.86 |
| No log | 20.0 | 140 | 2.2642 | 15.0965 | 6.6965 | 12.9499 | 13.9616 | 18.86 |
| No log | 21.0 | 147 | 2.2172 | 15.9807 | 7.581 | 13.7652 | 14.7561 | 18.86 |
| No log | 22.0 | 154 | 2.1728 | 16.0223 | 7.3494 | 13.6557 | 14.8175 | 18.98 |
| No log | 23.0 | 161 | 2.1288 | 15.8624 | 7.3123 | 13.5385 | 14.7155 | 18.98 |
| No log | 24.0 | 168 | 2.0880 | 15.6815 | 7.2739 | 13.4081 | 14.5708 | 18.98 |
| No log | 25.0 | 175 | 2.0464 | 15.7728 | 7.2739 | 13.4408 | 14.6141 | 18.98 |
| No log | 26.0 | 182 | 2.0058 | 16.0941 | 7.7024 | 13.9582 | 15.0287 | 19.0 |
| No log | 27.0 | 189 | 1.9649 | 16.2728 | 7.7024 | 14.016 | 15.1315 | 19.0 |
| No log | 28.0 | 196 | 1.9242 | 16.3716 | 7.5627 | 14.0692 | 14.9967 | 19.0 |
| No log | 29.0 | 203 | 1.8868 | 16.7062 | 8.0777 | 14.4908 | 15.3399 | 19.0 |
| No log | 30.0 | 210 | 1.8492 | 17.0537 | 8.5578 | 14.9207 | 15.733 | 19.0 |
| No log | 31.0 | 217 | 1.8141 | 17.4443 | 8.73 | 15.0351 | 16.0924 | 19.0 |
| No log | 32.0 | 224 | 1.7791 | 17.4203 | 8.7258 | 15.0247 | 16.0522 | 19.0 |
| No log | 33.0 | 231 | 1.7435 | 17.5906 | 8.8872 | 15.425 | 16.3617 | 19.0 |
| No log | 34.0 | 238 | 1.7118 | 17.5006 | 8.8774 | 15.3052 | 16.2158 | 19.0 |
| No log | 35.0 | 245 | 1.6789 | 17.8356 | 9.3694 | 15.6864 | 16.5223 | 19.0 |
| No log | 36.0 | 252 | 1.6519 | 18.167 | 9.8435 | 16.1156 | 17.0117 | 19.0 |
| No log | 37.0 | 259 | 1.6209 | 18.4921 | 10.1301 | 16.3481 | 17.2986 | 19.0 |
| No log | 38.0 | 266 | 1.5897 | 18.1784 | 9.9809 | 16.2313 | 17.2703 | 19.0 |
| No log | 39.0 | 273 | 1.5591 | 18.3933 | 10.1286 | 16.3521 | 17.3927 | 19.0 |
| No log | 40.0 | 280 | 1.5272 | 18.6151 | 10.3291 | 16.6078 | 17.7778 | 19.0 |
| No log | 41.0 | 287 | 1.4980 | 19.3033 | 11.0918 | 17.3141 | 18.4394 | 19.0 |
| No log | 42.0 | 294 | 1.4677 | 19.4567 | 11.1469 | 17.3278 | 18.6073 | 19.0 |
| No log | 43.0 | 301 | 1.4390 | 19.4743 | 11.2466 | 17.4536 | 18.6962 | 19.0 |
| No log | 44.0 | 308 | 1.4102 | 19.6048 | 11.2731 | 17.3539 | 18.6521 | 19.0 |
| No log | 45.0 | 315 | 1.3801 | 19.6608 | 11.4561 | 17.4567 | 18.8374 | 19.0 |
| No log | 46.0 | 322 | 1.3495 | 20.1292 | 12.002 | 17.7646 | 19.1702 | 19.0 |
| No log | 47.0 | 329 | 1.3201 | 20.4712 | 12.4172 | 18.1381 | 19.4745 | 19.0 |
| No log | 48.0 | 336 | 1.2926 | 20.8209 | 12.5027 | 18.4521 | 19.8635 | 19.0 |
| No log | 49.0 | 343 | 1.2651 | 21.1144 | 12.8328 | 18.7545 | 20.1599 | 19.0 |
| No log | 50.0 | 350 | 1.2386 | 20.986 | 12.5814 | 18.6581 | 20.0825 | 19.0 |
| No log | 51.0 | 357 | 1.2141 | 21.1851 | 12.6943 | 18.7884 | 20.1736 | 19.0 |
| No log | 52.0 | 364 | 1.1894 | 21.2413 | 12.8142 | 18.8696 | 20.2067 | 19.0 |
| No log | 53.0 | 371 | 1.1649 | 21.7568 | 13.5278 | 19.6002 | 20.8412 | 19.0 |
| No log | 54.0 | 378 | 1.1384 | 22.4831 | 14.5218 | 20.1422 | 21.5559 | 19.0 |
| No log | 55.0 | 385 | 1.1157 | 22.6313 | 14.8353 | 20.3762 | 21.7705 | 19.0 |
| No log | 56.0 | 392 | 1.0953 | 22.5042 | 14.6022 | 20.2466 | 21.582 | 19.0 |
| No log | 57.0 | 399 | 1.0747 | 22.5145 | 14.7475 | 20.2386 | 21.6141 | 19.0 |
| No log | 58.0 | 406 | 1.0559 | 22.7369 | 14.8047 | 20.2974 | 21.7249 | 19.0 |
| No log | 59.0 | 413 | 1.0372 | 22.9126 | 14.9207 | 20.457 | 21.8601 | 19.0 |
| No log | 60.0 | 420 | 1.0195 | 22.8047 | 15.1019 | 20.4638 | 21.7913 | 19.0 |
| No log | 61.0 | 427 | 1.0015 | 22.7677 | 15.1019 | 20.4523 | 21.6938 | 19.0 |
| No log | 62.0 | 434 | 0.9835 | 22.8638 | 15.2116 | 20.5492 | 21.8304 | 19.0 |
| No log | 63.0 | 441 | 0.9655 | 23.2814 | 15.6409 | 20.8081 | 22.264 | 19.0 |
| No log | 64.0 | 448 | 0.9482 | 23.4252 | 15.8487 | 20.9933 | 22.4011 | 19.0 |
| No log | 65.0 | 455 | 0.9297 | 23.1092 | 15.6467 | 20.9232 | 22.1535 | 19.0 |
| No log | 66.0 | 462 | 0.9111 | 23.1047 | 15.6467 | 20.8809 | 22.148 | 19.0 |
| No log | 67.0 | 469 | 0.8930 | 23.6157 | 15.7791 | 21.0336 | 22.4882 | 19.0 |
| No log | 68.0 | 476 | 0.8758 | 23.7294 | 15.8868 | 21.0767 | 22.5998 | 19.0 |
| No log | 69.0 | 483 | 0.8600 | 23.6303 | 15.9537 | 21.3186 | 22.5258 | 19.0 |
| No log | 70.0 | 490 | 0.8457 | 24.0211 | 16.3344 | 21.6141 | 22.8646 | 19.0 |
| No log | 71.0 | 497 | 0.8306 | 24.4543 | 16.7445 | 22.22 | 23.35 | 19.0 |
| 2.234 | 72.0 | 504 | 0.8169 | 24.3446 | 16.5757 | 22.0443 | 23.1601 | 18.94 |
| 2.234 | 73.0 | 511 | 0.8028 | 24.6037 | 16.8537 | 22.254 | 23.4177 | 19.0 |
| 2.234 | 74.0 | 518 | 0.7893 | 24.5006 | 16.93 | 22.3802 | 23.3089 | 19.0 |
| 2.234 | 75.0 | 525 | 0.7767 | 24.5641 | 17.1414 | 22.439 | 23.3614 | 19.0 |
| 2.234 | 76.0 | 532 | 0.7628 | 24.4938 | 17.1622 | 22.4595 | 23.4953 | 19.0 |
| 2.234 | 77.0 | 539 | 0.7491 | 24.4955 | 17.1139 | 22.5084 | 23.4422 | 19.0 |
| 2.234 | 78.0 | 546 | 0.7370 | 25.2992 | 17.7973 | 23.4208 | 24.0642 | 19.0 |
| 2.234 | 79.0 | 553 | 0.7264 | 25.3397 | 17.6927 | 23.4483 | 24.1897 | 18.94 |
| 2.234 | 80.0 | 560 | 0.7171 | 25.2813 | 17.5431 | 23.371 | 24.0918 | 18.94 |
| 2.234 | 81.0 | 567 | 0.7065 | 24.8028 | 17.3248 | 23.0219 | 23.6579 | 19.0 |
| 2.234 | 82.0 | 574 | 0.6955 | 25.2603 | 17.6915 | 23.322 | 24.0599 | 18.94 |
| 2.234 | 83.0 | 581 | 0.6850 | 25.5258 | 17.8746 | 23.6253 | 24.4615 | 18.94 |
| 2.234 | 84.0 | 588 | 0.6753 | 25.5363 | 17.9781 | 23.7546 | 24.4257 | 18.94 |
| 2.234 | 85.0 | 595 | 0.6658 | 25.3495 | 18.1089 | 23.703 | 24.282 | 19.0 |
| 2.234 | 86.0 | 602 | 0.6569 | 25.0708 | 17.8801 | 23.4282 | 23.9078 | 18.94 |
| 2.234 | 87.0 | 609 | 0.6489 | 25.0266 | 17.9922 | 23.422 | 23.9164 | 18.94 |
| 2.234 | 88.0 | 616 | 0.6407 | 25.0172 | 18.0199 | 23.4155 | 23.9337 | 19.0 |
| 2.234 | 89.0 | 623 | 0.6317 | 24.922 | 17.9857 | 23.3527 | 23.8011 | 19.0 |
| 2.234 | 90.0 | 630 | 0.6234 | 24.9009 | 17.9847 | 23.2866 | 23.8712 | 19.0 |
| 2.234 | 91.0 | 637 | 0.6154 | 24.8534 | 18.0524 | 23.2679 | 23.8242 | 19.0 |
| 2.234 | 92.0 | 644 | 0.6082 | 24.9376 | 18.0509 | 23.3574 | 23.8951 | 19.0 |
| 2.234 | 93.0 | 651 | 0.6004 | 25.0 | 18.1129 | 23.4513 | 23.9827 | 19.0 |
| 2.234 | 94.0 | 658 | 0.5934 | 24.8637 | 17.7982 | 23.115 | 23.7761 | 19.0 |
| 2.234 | 95.0 | 665 | 0.5865 | 24.5734 | 17.5708 | 22.8594 | 23.5395 | 19.0 |
| 2.234 | 96.0 | 672 | 0.5793 | 24.6743 | 17.8841 | 23.0139 | 23.5864 | 19.0 |
| 2.234 | 97.0 | 679 | 0.5722 | 25.1153 | 18.5566 | 23.5382 | 24.0676 | 19.0 |
| 2.234 | 98.0 | 686 | 0.5664 | 25.2336 | 18.6306 | 23.5665 | 24.2091 | 19.0 |
| 2.234 | 99.0 | 693 | 0.5606 | 25.8403 | 19.2544 | 24.1043 | 24.827 | 19.0 |
| 2.234 | 100.0 | 700 | 0.5547 | 25.8401 | 19.3103 | 24.1723 | 24.8189 | 19.0 |
| 2.234 | 101.0 | 707 | 0.5489 | 25.9165 | 19.7932 | 24.4001 | 25.0214 | 19.0 |
| 2.234 | 102.0 | 714 | 0.5427 | 26.1503 | 20.1415 | 24.672 | 25.2171 | 19.0 |
| 2.234 | 103.0 | 721 | 0.5372 | 26.2728 | 20.1751 | 24.7661 | 25.3402 | 19.0 |
| 2.234 | 104.0 | 728 | 0.5321 | 26.3086 | 20.2377 | 24.7661 | 25.3768 | 19.0 |
| 2.234 | 105.0 | 735 | 0.5272 | 26.3324 | 20.1971 | 24.741 | 25.4227 | 19.0 |
| 2.234 | 106.0 | 742 | 0.5221 | 26.6528 | 20.8582 | 25.1293 | 25.8014 | 19.0 |
| 2.234 | 107.0 | 749 | 0.5161 | 26.6946 | 20.8596 | 25.0726 | 25.8291 | 19.0 |
| 2.234 | 108.0 | 756 | 0.5114 | 26.59 | 20.8571 | 25.1594 | 25.7803 | 19.0 |
| 2.234 | 109.0 | 763 | 0.5070 | 26.5239 | 20.6469 | 25.049 | 25.6539 | 19.0 |
| 2.234 | 110.0 | 770 | 0.5027 | 26.5239 | 20.6263 | 25.049 | 25.6257 | 19.0 |
| 2.234 | 111.0 | 777 | 0.4977 | 26.6538 | 20.909 | 25.1895 | 25.8624 | 19.0 |
| 2.234 | 112.0 | 784 | 0.4927 | 26.6828 | 20.7963 | 25.172 | 25.8074 | 19.0 |
| 2.234 | 113.0 | 791 | 0.4872 | 26.6042 | 20.7493 | 25.0792 | 25.7606 | 19.0 |
| 2.234 | 114.0 | 798 | 0.4820 | 26.3124 | 20.2776 | 24.7171 | 25.3684 | 19.0 |
| 2.234 | 115.0 | 805 | 0.4779 | 26.5558 | 20.4997 | 24.8879 | 25.5925 | 19.0 |
| 2.234 | 116.0 | 812 | 0.4736 | 26.2154 | 20.2546 | 24.6121 | 25.3458 | 19.0 |
| 2.234 | 117.0 | 819 | 0.4691 | 26.2652 | 20.2177 | 24.7039 | 25.3086 | 19.0 |
| 2.234 | 118.0 | 826 | 0.4658 | 26.2129 | 20.154 | 24.6656 | 25.2793 | 19.0 |
| 2.234 | 119.0 | 833 | 0.4623 | 26.4794 | 20.4029 | 24.8631 | 25.5696 | 19.0 |
| 2.234 | 120.0 | 840 | 0.4582 | 26.3077 | 20.2257 | 24.7431 | 25.3879 | 19.0 |
| 2.234 | 121.0 | 847 | 0.4545 | 26.0652 | 19.935 | 24.5384 | 25.097 | 19.0 |
| 2.234 | 122.0 | 854 | 0.4501 | 26.361 | 20.292 | 24.7871 | 25.452 | 19.0 |
| 2.234 | 123.0 | 861 | 0.4463 | 26.361 | 20.292 | 24.7871 | 25.452 | 19.0 |
| 2.234 | 124.0 | 868 | 0.4433 | 26.3758 | 20.351 | 24.7589 | 25.4636 | 19.0 |
| 2.234 | 125.0 | 875 | 0.4399 | 26.3758 | 20.351 | 24.7589 | 25.4636 | 19.0 |
| 2.234 | 126.0 | 882 | 0.4365 | 26.3459 | 20.292 | 24.7834 | 25.4484 | 19.0 |
| 2.234 | 127.0 | 889 | 0.4337 | 26.3229 | 20.2924 | 24.7529 | 25.445 | 19.0 |
| 2.234 | 128.0 | 896 | 0.4310 | 26.3229 | 20.2924 | 24.7529 | 25.445 | 19.0 |
| 2.234 | 129.0 | 903 | 0.4280 | 26.361 | 20.292 | 24.759 | 25.452 | 19.0 |
| 2.234 | 130.0 | 910 | 0.4251 | 26.361 | 20.292 | 24.759 | 25.452 | 19.0 |
| 2.234 | 131.0 | 917 | 0.4219 | 26.2313 | 20.0755 | 24.5457 | 25.2876 | 19.0 |
| 2.234 | 132.0 | 924 | 0.4190 | 26.3448 | 20.2413 | 24.5632 | 25.3904 | 19.0 |
| 2.234 | 133.0 | 931 | 0.4161 | 26.2977 | 20.2013 | 24.6035 | 25.3575 | 19.0 |
| 2.234 | 134.0 | 938 | 0.4125 | 26.9053 | 20.8956 | 25.1115 | 25.8695 | 19.0 |
| 2.234 | 135.0 | 945 | 0.4094 | 27.0423 | 20.9187 | 25.2399 | 25.977 | 19.0 |
| 2.234 | 136.0 | 952 | 0.4061 | 26.941 | 20.9813 | 25.0791 | 25.8246 | 19.0 |
| 2.234 | 137.0 | 959 | 0.4032 | 26.941 | 20.9813 | 25.0791 | 25.8246 | 19.0 |
| 2.234 | 138.0 | 966 | 0.4005 | 26.7839 | 20.9539 | 24.9493 | 25.735 | 19.0 |
| 2.234 | 139.0 | 973 | 0.3981 | 26.8264 | 20.9522 | 24.9475 | 25.7656 | 19.0 |
| 2.234 | 140.0 | 980 | 0.3950 | 27.1217 | 21.3657 | 25.2847 | 26.0664 | 19.0 |
| 2.234 | 141.0 | 987 | 0.3917 | 26.8529 | 21.3392 | 25.2223 | 25.8628 | 19.0 |
| 2.234 | 142.0 | 994 | 0.3891 | 26.9542 | 21.3392 | 25.3029 | 25.9634 | 19.0 |
| 0.8247 | 143.0 | 1001 | 0.3872 | 26.9542 | 21.3392 | 25.3029 | 25.9634 | 19.0 |
| 0.8247 | 144.0 | 1008 | 0.3851 | 26.954 | 21.339 | 25.1999 | 25.9115 | 19.0 |
| 0.8247 | 145.0 | 1015 | 0.3828 | 26.954 | 21.339 | 25.1999 | 25.9115 | 19.0 |
| 0.8247 | 146.0 | 1022 | 0.3795 | 27.211 | 21.7609 | 25.5337 | 26.2491 | 19.0 |
| 0.8247 | 147.0 | 1029 | 0.3765 | 27.5119 | 21.8162 | 25.773 | 26.4442 | 19.0 |
| 0.8247 | 148.0 | 1036 | 0.3747 | 27.5147 | 21.8166 | 25.816 | 26.4261 | 19.0 |
| 0.8247 | 149.0 | 1043 | 0.3721 | 27.11 | 21.2671 | 25.3668 | 25.9832 | 19.0 |
| 0.8247 | 150.0 | 1050 | 0.3695 | 27.011 | 21.3523 | 25.275 | 25.9849 | 19.0 |
| 0.8247 | 151.0 | 1057 | 0.3667 | 27.011 | 21.3523 | 25.275 | 25.9849 | 19.0 |
| 0.8247 | 152.0 | 1064 | 0.3643 | 26.8762 | 21.3229 | 25.2291 | 25.8448 | 19.0 |
| 0.8247 | 153.0 | 1071 | 0.3619 | 26.7423 | 21.3148 | 25.1436 | 25.7247 | 19.0 |
| 0.8247 | 154.0 | 1078 | 0.3597 | 27.2285 | 21.7893 | 25.5016 | 26.1363 | 19.0 |
| 0.8247 | 155.0 | 1085 | 0.3569 | 26.9347 | 21.4481 | 25.202 | 25.9288 | 19.0 |
| 0.8247 | 156.0 | 1092 | 0.3542 | 26.8073 | 21.4074 | 25.164 | 25.8427 | 19.0 |
| 0.8247 | 157.0 | 1099 | 0.3523 | 26.8585 | 21.4484 | 25.3552 | 26.1027 | 19.0 |
| 0.8247 | 158.0 | 1106 | 0.3501 | 26.8874 | 21.4484 | 25.4233 | 26.1418 | 19.0 |
| 0.8247 | 159.0 | 1113 | 0.3481 | 26.3889 | 20.7315 | 24.9697 | 25.5298 | 19.0 |
| 0.8247 | 160.0 | 1120 | 0.3462 | 26.4141 | 20.7382 | 24.9742 | 25.5443 | 19.0 |
| 0.8247 | 161.0 | 1127 | 0.3444 | 26.4434 | 20.7724 | 24.94 | 25.4982 | 19.0 |
| 0.8247 | 162.0 | 1134 | 0.3421 | 26.44 | 20.7714 | 24.9389 | 25.4971 | 19.0 |
| 0.8247 | 163.0 | 1141 | 0.3400 | 26.4885 | 20.8024 | 24.954 | 25.5336 | 19.0 |
| 0.8247 | 164.0 | 1148 | 0.3371 | 26.8424 | 21.4757 | 25.3475 | 26.025 | 19.0 |
| 0.8247 | 165.0 | 1155 | 0.3348 | 26.6869 | 21.3582 | 25.1949 | 25.8305 | 19.0 |
| 0.8247 | 166.0 | 1162 | 0.3328 | 26.7864 | 21.3582 | 25.3004 | 25.9217 | 19.0 |
| 0.8247 | 167.0 | 1169 | 0.3307 | 26.4961 | 21.3053 | 25.0805 | 25.6481 | 19.0 |
| 0.8247 | 168.0 | 1176 | 0.3290 | 26.1855 | 20.7598 | 24.7578 | 25.3158 | 19.0 |
| 0.8247 | 169.0 | 1183 | 0.3276 | 26.1855 | 20.7598 | 24.7578 | 25.3158 | 19.0 |
| 0.8247 | 170.0 | 1190 | 0.3255 | 26.3362 | 20.7593 | 24.7501 | 25.3055 | 19.0 |
| 0.8247 | 171.0 | 1197 | 0.3236 | 26.5342 | 21.3055 | 25.0784 | 25.7001 | 19.0 |
| 0.8247 | 172.0 | 1204 | 0.3219 | 26.1834 | 20.7593 | 24.7567 | 25.3127 | 19.0 |
| 0.8247 | 173.0 | 1211 | 0.3199 | 26.5384 | 21.3057 | 25.0795 | 25.7032 | 19.0 |
| 0.8247 | 174.0 | 1218 | 0.3181 | 26.5384 | 21.3057 | 25.0795 | 25.7032 | 19.0 |
| 0.8247 | 175.0 | 1225 | 0.3163 | 26.4 | 21.2578 | 24.9477 | 25.5661 | 19.0 |
| 0.8247 | 176.0 | 1232 | 0.3144 | 26.5428 | 21.3112 | 24.9866 | 25.6532 | 19.0 |
| 0.8247 | 177.0 | 1239 | 0.3123 | 26.4446 | 21.2931 | 24.9477 | 25.6048 | 19.0 |
| 0.8247 | 178.0 | 1246 | 0.3103 | 26.4446 | 21.2931 | 24.9477 | 25.6048 | 19.0 |
| 0.8247 | 179.0 | 1253 | 0.3086 | 26.4446 | 21.2931 | 24.9477 | 25.6048 | 19.0 |
| 0.8247 | 180.0 | 1260 | 0.3067 | 26.5699 | 21.3383 | 25.0784 | 25.7432 | 19.0 |
| 0.8247 | 181.0 | 1267 | 0.3051 | 26.5342 | 21.3055 | 25.0784 | 25.7001 | 19.0 |
| 0.8247 | 182.0 | 1274 | 0.3033 | 26.5342 | 21.3055 | 25.0784 | 25.7001 | 19.0 |
| 0.8247 | 183.0 | 1281 | 0.3022 | 26.6363 | 21.3383 | 25.0784 | 25.7852 | 19.0 |
| 0.8247 | 184.0 | 1288 | 0.3009 | 26.5699 | 21.3383 | 25.0784 | 25.7432 | 19.0 |
| 0.8247 | 185.0 | 1295 | 0.2994 | 26.4861 | 21.3383 | 25.0215 | 25.6423 | 19.0 |
| 0.8247 | 186.0 | 1302 | 0.2972 | 26.5699 | 21.3383 | 25.0784 | 25.7432 | 19.0 |
| 0.8247 | 187.0 | 1309 | 0.2953 | 26.5364 | 21.3383 | 25.0287 | 25.7335 | 19.0 |
| 0.8247 | 188.0 | 1316 | 0.2933 | 26.4919 | 21.2931 | 24.978 | 25.6755 | 19.0 |
| 0.8247 | 189.0 | 1323 | 0.2917 | 26.4919 | 21.2931 | 24.978 | 25.6755 | 19.0 |
| 0.8247 | 190.0 | 1330 | 0.2903 | 26.4965 | 21.2937 | 24.9822 | 25.6765 | 19.0 |
| 0.8247 | 191.0 | 1337 | 0.2886 | 26.4965 | 21.2937 | 24.9822 | 25.6765 | 19.0 |
| 0.8247 | 192.0 | 1344 | 0.2871 | 26.4965 | 21.2937 | 24.9822 | 25.6765 | 19.0 |
| 0.8247 | 193.0 | 1351 | 0.2857 | 26.4965 | 21.2937 | 24.9822 | 25.6765 | 19.0 |
| 0.8247 | 194.0 | 1358 | 0.2845 | 27.6214 | 22.7746 | 26.212 | 26.7893 | 19.0 |
| 0.8247 | 195.0 | 1365 | 0.2833 | 27.6766 | 22.8377 | 26.2459 | 26.8427 | 19.0 |
| 0.8247 | 196.0 | 1372 | 0.2821 | 26.6668 | 21.412 | 25.0675 | 25.7861 | 19.0 |
| 0.8247 | 197.0 | 1379 | 0.2808 | 26.5377 | 21.3511 | 25.0292 | 25.7345 | 19.0 |
| 0.8247 | 198.0 | 1386 | 0.2794 | 26.5377 | 21.3511 | 25.0292 | 25.7345 | 19.0 |
| 0.8247 | 199.0 | 1393 | 0.2782 | 26.5377 | 21.3511 | 25.0292 | 25.7345 | 19.0 |
| 0.8247 | 200.0 | 1400 | 0.2763 | 27.6214 | 22.8029 | 26.2108 | 26.7873 | 19.0 |
| 0.8247 | 201.0 | 1407 | 0.2745 | 27.6214 | 22.8029 | 26.2108 | 26.7873 | 19.0 |
| 0.8247 | 202.0 | 1414 | 0.2732 | 27.6214 | 22.8029 | 26.2108 | 26.7873 | 19.0 |
| 0.8247 | 203.0 | 1421 | 0.2719 | 27.6141 | 22.7604 | 26.1742 | 26.7845 | 19.0 |
| 0.8247 | 204.0 | 1428 | 0.2708 | 27.6141 | 22.7094 | 26.1748 | 26.7863 | 19.0 |
| 0.8247 | 205.0 | 1435 | 0.2697 | 27.6037 | 22.7094 | 26.1748 | 26.7482 | 19.0 |
| 0.8247 | 206.0 | 1442 | 0.2689 | 27.5437 | 22.7107 | 26.1754 | 26.7281 | 19.0 |
| 0.8247 | 207.0 | 1449 | 0.2683 | 27.685 | 22.7621 | 26.2104 | 26.7859 | 19.0 |
| 0.8247 | 208.0 | 1456 | 0.2671 | 27.7224 | 22.7621 | 26.2104 | 26.823 | 19.0 |
| 0.8247 | 209.0 | 1463 | 0.2657 | 27.6141 | 22.7604 | 26.1742 | 26.7845 | 19.0 |
| 0.8247 | 210.0 | 1470 | 0.2647 | 27.6745 | 22.837 | 26.2428 | 26.8417 | 19.0 |
| 0.8247 | 211.0 | 1477 | 0.2636 | 27.6745 | 22.837 | 26.2428 | 26.8417 | 19.0 |
| 0.8247 | 212.0 | 1484 | 0.2622 | 27.6214 | 22.8027 | 26.2066 | 26.7852 | 19.0 |
| 0.8247 | 213.0 | 1491 | 0.2605 | 27.6214 | 22.8027 | 26.2066 | 26.7852 | 19.0 |
| 0.8247 | 214.0 | 1498 | 0.2590 | 27.6214 | 22.8027 | 26.2066 | 26.7852 | 19.0 |
| 0.4848 | 215.0 | 1505 | 0.2577 | 27.6214 | 22.8027 | 26.2066 | 26.7852 | 19.0 |
| 0.4848 | 216.0 | 1512 | 0.2561 | 27.6141 | 22.7099 | 26.1772 | 26.7876 | 19.0 |
| 0.4848 | 217.0 | 1519 | 0.2545 | 27.6141 | 22.7099 | 26.1772 | 26.7876 | 19.0 |
| 0.4848 | 218.0 | 1526 | 0.2530 | 28.2128 | 23.39 | 26.725 | 27.4618 | 19.0 |
| 0.4848 | 219.0 | 1533 | 0.2516 | 28.2113 | 23.4341 | 26.725 | 27.4547 | 19.0 |
| 0.4848 | 220.0 | 1540 | 0.2508 | 28.2113 | 23.4341 | 26.725 | 27.4547 | 19.0 |
| 0.4848 | 221.0 | 1547 | 0.2497 | 28.2113 | 23.4341 | 26.725 | 27.4547 | 19.0 |
| 0.4848 | 222.0 | 1554 | 0.2487 | 28.2113 | 23.4341 | 26.725 | 27.4547 | 19.0 |
| 0.4848 | 223.0 | 1561 | 0.2473 | 28.4621 | 23.6287 | 27.0471 | 27.6486 | 19.0 |
| 0.4848 | 224.0 | 1568 | 0.2457 | 28.4621 | 23.6287 | 27.0471 | 27.6486 | 19.0 |
| 0.4848 | 225.0 | 1575 | 0.2444 | 28.8101 | 24.2509 | 27.5583 | 28.09 | 19.0 |
| 0.4848 | 226.0 | 1582 | 0.2435 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 227.0 | 1589 | 0.2425 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 228.0 | 1596 | 0.2417 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 229.0 | 1603 | 0.2410 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 230.0 | 1610 | 0.2397 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 231.0 | 1617 | 0.2380 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 232.0 | 1624 | 0.2368 | 28.8526 | 24.2514 | 27.5604 | 28.1516 | 19.0 |
| 0.4848 | 233.0 | 1631 | 0.2356 | 28.8526 | 24.2514 | 27.5604 | 28.1516 | 19.0 |
| 0.4848 | 234.0 | 1638 | 0.2344 | 28.8526 | 24.2514 | 27.5604 | 28.1516 | 19.0 |
| 0.4848 | 235.0 | 1645 | 0.2335 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 236.0 | 1652 | 0.2329 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 237.0 | 1659 | 0.2323 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 238.0 | 1666 | 0.2316 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 239.0 | 1673 | 0.2306 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 240.0 | 1680 | 0.2296 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 241.0 | 1687 | 0.2286 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 242.0 | 1694 | 0.2275 | 28.8101 | 24.2509 | 27.5583 | 28.09 | 19.0 |
| 0.4848 | 243.0 | 1701 | 0.2264 | 28.8101 | 24.2509 | 27.5583 | 28.09 | 19.0 |
| 0.4848 | 244.0 | 1708 | 0.2256 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 245.0 | 1715 | 0.2248 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 246.0 | 1722 | 0.2240 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 247.0 | 1729 | 0.2226 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 248.0 | 1736 | 0.2213 | 28.7896 | 24.2509 | 27.5183 | 28.0556 | 19.0 |
| 0.4848 | 249.0 | 1743 | 0.2207 | 28.8101 | 24.2509 | 27.5583 | 28.09 | 19.0 |
| 0.4848 | 250.0 | 1750 | 0.2201 | 28.8515 | 24.2509 | 27.5583 | 28.1505 | 19.0 |
| 0.4848 | 251.0 | 1757 | 0.2189 | 30.1056 | 25.7653 | 28.9152 | 29.4717 | 19.0 |
| 0.4848 | 252.0 | 1764 | 0.2178 | 30.1056 | 25.7653 | 28.9152 | 29.4717 | 19.0 |
| 0.4848 | 253.0 | 1771 | 0.2170 | 30.0731 | 25.7653 | 28.9152 | 29.4304 | 19.0 |
| 0.4848 | 254.0 | 1778 | 0.2162 | 30.0731 | 25.7653 | 28.9152 | 29.4304 | 19.0 |
| 0.4848 | 255.0 | 1785 | 0.2154 | 30.1091 | 25.8369 | 28.9446 | 29.4812 | 19.0 |
| 0.4848 | 256.0 | 1792 | 0.2145 | 30.1091 | 25.8369 | 28.9446 | 29.4812 | 19.0 |
| 0.4848 | 257.0 | 1799 | 0.2135 | 30.1328 | 26.0146 | 29.0423 | 29.5189 | 19.0 |
| 0.4848 | 258.0 | 1806 | 0.2127 | 30.1328 | 26.0146 | 29.0423 | 29.5189 | 19.0 |
| 0.4848 | 259.0 | 1813 | 0.2118 | 30.1496 | 25.901 | 28.9818 | 29.4954 | 19.0 |
| 0.4848 | 260.0 | 1820 | 0.2109 | 30.5807 | 26.586 | 29.5567 | 30.027 | 19.0 |
| 0.4848 | 261.0 | 1827 | 0.2099 | 30.1328 | 26.0146 | 29.0423 | 29.5189 | 19.0 |
| 0.4848 | 262.0 | 1834 | 0.2092 | 29.975 | 25.7233 | 28.8868 | 29.3017 | 19.0 |
| 0.4848 | 263.0 | 1841 | 0.2085 | 30.0805 | 25.7221 | 28.8845 | 29.3801 | 19.0 |
| 0.4848 | 264.0 | 1848 | 0.2076 | 30.0805 | 25.7221 | 28.8845 | 29.3801 | 19.0 |
| 0.4848 | 265.0 | 1855 | 0.2067 | 30.5283 | 26.4358 | 29.4239 | 29.9175 | 19.0 |
| 0.4848 | 266.0 | 1862 | 0.2059 | 30.0805 | 25.7221 | 28.8845 | 29.3801 | 19.0 |
| 0.4848 | 267.0 | 1869 | 0.2052 | 30.1084 | 25.7212 | 28.8823 | 29.4363 | 19.0 |
| 0.4848 | 268.0 | 1876 | 0.2042 | 30.082 | 25.7164 | 28.886 | 29.4007 | 19.0 |
| 0.4848 | 269.0 | 1883 | 0.2034 | 30.082 | 25.7164 | 28.886 | 29.4007 | 19.0 |
| 0.4848 | 270.0 | 1890 | 0.2023 | 30.082 | 25.7164 | 28.886 | 29.4007 | 19.0 |
| 0.4848 | 271.0 | 1897 | 0.2015 | 29.9475 | 25.7199 | 28.8905 | 29.2879 | 19.0 |
| 0.4848 | 272.0 | 1904 | 0.2007 | 29.9475 | 25.7199 | 28.8905 | 29.2879 | 19.0 |
| 0.4848 | 273.0 | 1911 | 0.2001 | 29.9475 | 25.7199 | 28.8905 | 29.2879 | 19.0 |
| 0.4848 | 274.0 | 1918 | 0.1996 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 275.0 | 1925 | 0.1988 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 276.0 | 1932 | 0.1978 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 277.0 | 1939 | 0.1972 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 278.0 | 1946 | 0.1968 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 279.0 | 1953 | 0.1965 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 280.0 | 1960 | 0.1959 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 281.0 | 1967 | 0.1954 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 282.0 | 1974 | 0.1949 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 283.0 | 1981 | 0.1945 | 30.4196 | 26.3965 | 29.4251 | 29.7909 | 19.0 |
| 0.4848 | 284.0 | 1988 | 0.1939 | 30.4196 | 26.3377 | 29.3825 | 29.7858 | 19.0 |
| 0.4848 | 285.0 | 1995 | 0.1934 | 30.9381 | 26.9857 | 29.8217 | 30.3088 | 19.0 |
| 0.347 | 286.0 | 2002 | 0.1928 | 31.0936 | 27.0091 | 29.8492 | 30.3918 | 19.0 |
| 0.347 | 287.0 | 2009 | 0.1916 | 30.9887 | 26.9857 | 29.8217 | 30.3483 | 19.0 |
| 0.347 | 288.0 | 2016 | 0.1904 | 30.9096 | 26.8073 | 29.7311 | 30.2601 | 19.0 |
| 0.347 | 289.0 | 2023 | 0.1894 | 30.8466 | 26.8073 | 29.7332 | 30.2227 | 19.0 |
| 0.347 | 290.0 | 2030 | 0.1884 | 30.9396 | 26.9869 | 29.8238 | 30.3109 | 19.0 |
| 0.347 | 291.0 | 2037 | 0.1876 | 30.9898 | 26.9869 | 29.8238 | 30.3493 | 19.0 |
| 0.347 | 292.0 | 2044 | 0.1870 | 30.9898 | 26.9869 | 29.8238 | 30.3493 | 19.0 |
| 0.347 | 293.0 | 2051 | 0.1866 | 30.9315 | 26.945 | 29.7765 | 30.3159 | 19.0 |
| 0.347 | 294.0 | 2058 | 0.1860 | 30.9902 | 27.0338 | 29.8665 | 30.3511 | 19.0 |
| 0.347 | 295.0 | 2065 | 0.1855 | 30.9902 | 27.0338 | 29.8665 | 30.3511 | 19.0 |
| 0.347 | 296.0 | 2072 | 0.1850 | 30.9898 | 26.9869 | 29.8238 | 30.3493 | 19.0 |
| 0.347 | 297.0 | 2079 | 0.1842 | 30.9381 | 26.9857 | 29.8217 | 30.3088 | 19.0 |
| 0.347 | 298.0 | 2086 | 0.1836 | 30.9381 | 26.9857 | 29.8217 | 30.3088 | 19.0 |
| 0.347 | 299.0 | 2093 | 0.1828 | 30.8217 | 26.9232 | 29.7543 | 30.2418 | 19.0 |
| 0.347 | 300.0 | 2100 | 0.1823 | 30.8743 | 26.9232 | 29.7543 | 30.2961 | 19.0 |
| 0.347 | 301.0 | 2107 | 0.1818 | 30.8743 | 26.9232 | 29.7543 | 30.2961 | 19.0 |
| 0.347 | 302.0 | 2114 | 0.1815 | 30.8743 | 26.9232 | 29.7543 | 30.2961 | 19.0 |
| 0.347 | 303.0 | 2121 | 0.1810 | 30.8217 | 26.9232 | 29.7543 | 30.2418 | 19.0 |
| 0.347 | 304.0 | 2128 | 0.1805 | 30.8743 | 26.9232 | 29.7543 | 30.2961 | 19.0 |
| 0.347 | 305.0 | 2135 | 0.1800 | 30.8824 | 26.9766 | 29.7982 | 30.298 | 19.0 |
| 0.347 | 306.0 | 2142 | 0.1794 | 30.8824 | 26.9766 | 29.7982 | 30.298 | 19.0 |
| 0.347 | 307.0 | 2149 | 0.1789 | 30.8824 | 26.9766 | 29.7982 | 30.298 | 19.0 |
| 0.347 | 308.0 | 2156 | 0.1784 | 30.8743 | 26.9232 | 29.7543 | 30.2961 | 19.0 |
| 0.347 | 309.0 | 2163 | 0.1777 | 31.2848 | 27.323 | 30.116 | 30.5512 | 19.0 |
| 0.347 | 310.0 | 2170 | 0.1770 | 31.2848 | 27.323 | 30.116 | 30.5512 | 19.0 |
| 0.347 | 311.0 | 2177 | 0.1767 | 30.9902 | 27.0332 | 29.8646 | 30.3501 | 19.0 |
| 0.347 | 312.0 | 2184 | 0.1762 | 30.9902 | 27.0332 | 29.8646 | 30.3501 | 19.0 |
| 0.347 | 313.0 | 2191 | 0.1758 | 30.9902 | 27.0332 | 29.8646 | 30.3501 | 19.0 |
| 0.347 | 314.0 | 2198 | 0.1754 | 30.9902 | 27.0332 | 29.8646 | 30.3501 | 19.0 |
| 0.347 | 315.0 | 2205 | 0.1749 | 31.2848 | 27.323 | 30.116 | 30.5512 | 19.0 |
| 0.347 | 316.0 | 2212 | 0.1741 | 31.2811 | 27.2769 | 30.0679 | 30.5502 | 19.0 |
| 0.347 | 317.0 | 2219 | 0.1735 | 31.123 | 27.0091 | 29.8492 | 30.4411 | 19.0 |
| 0.347 | 318.0 | 2226 | 0.1729 | 31.123 | 27.0091 | 29.8492 | 30.4411 | 19.0 |
| 0.347 | 319.0 | 2233 | 0.1722 | 31.123 | 27.0091 | 29.8492 | 30.4411 | 19.0 |
| 0.347 | 320.0 | 2240 | 0.1717 | 31.123 | 27.0091 | 29.8492 | 30.4411 | 19.0 |
| 0.347 | 321.0 | 2247 | 0.1711 | 31.4166 | 27.3285 | 30.1176 | 30.6199 | 19.0 |
| 0.347 | 322.0 | 2254 | 0.1706 | 31.3003 | 27.2493 | 30.0134 | 30.4873 | 19.0 |
| 0.347 | 323.0 | 2261 | 0.1704 | 31.3003 | 27.2493 | 30.0134 | 30.4873 | 19.0 |
| 0.347 | 324.0 | 2268 | 0.1700 | 31.3003 | 27.2493 | 30.0134 | 30.4873 | 19.0 |
| 0.347 | 325.0 | 2275 | 0.1697 | 31.3003 | 27.2493 | 30.0134 | 30.4873 | 19.0 |
| 0.347 | 326.0 | 2282 | 0.1694 | 31.3003 | 27.2493 | 30.0134 | 30.4873 | 19.0 |
| 0.347 | 327.0 | 2289 | 0.1690 | 31.3003 | 27.2493 | 30.0134 | 30.4873 | 19.0 |
| 0.347 | 328.0 | 2296 | 0.1687 | 31.3003 | 27.2493 | 30.0134 | 30.4873 | 19.0 |
| 0.347 | 329.0 | 2303 | 0.1682 | 31.3003 | 27.2493 | 30.0134 | 30.4873 | 19.0 |
| 0.347 | 330.0 | 2310 | 0.1677 | 31.3003 | 27.2493 | 30.0134 | 30.4873 | 19.0 |
| 0.347 | 331.0 | 2317 | 0.1671 | 31.3571 | 27.2889 | 30.0694 | 30.5056 | 19.0 |
| 0.347 | 332.0 | 2324 | 0.1666 | 31.2811 | 27.2769 | 30.0679 | 30.5502 | 19.0 |
| 0.347 | 333.0 | 2331 | 0.1663 | 31.2848 | 27.323 | 30.116 | 30.5512 | 19.0 |
| 0.347 | 334.0 | 2338 | 0.1656 | 31.2317 | 27.2791 | 30.0667 | 30.525 | 19.0 |
| 0.347 | 335.0 | 2345 | 0.1649 | 31.1232 | 27.2403 | 30.0005 | 30.4292 | 19.0 |
| 0.347 | 336.0 | 2352 | 0.1645 | 31.1232 | 27.2403 | 30.0005 | 30.4292 | 19.0 |
| 0.347 | 337.0 | 2359 | 0.1642 | 31.1232 | 27.2403 | 30.0005 | 30.4292 | 19.0 |
| 0.347 | 338.0 | 2366 | 0.1639 | 31.1823 | 27.2791 | 30.0519 | 30.4612 | 19.0 |
| 0.347 | 339.0 | 2373 | 0.1632 | 31.1731 | 27.2299 | 30.0025 | 30.458 | 19.0 |
| 0.347 | 340.0 | 2380 | 0.1627 | 31.3571 | 27.2889 | 30.0694 | 30.5056 | 19.0 |
| 0.347 | 341.0 | 2387 | 0.1624 | 31.3571 | 27.2889 | 30.0694 | 30.5056 | 19.0 |
| 0.347 | 342.0 | 2394 | 0.1623 | 31.3571 | 27.2889 | 30.0694 | 30.5056 | 19.0 |
| 0.347 | 343.0 | 2401 | 0.1619 | 31.3571 | 27.2889 | 30.0694 | 30.5056 | 19.0 |
| 0.347 | 344.0 | 2408 | 0.1613 | 31.3571 | 27.2889 | 30.0694 | 30.5056 | 19.0 |
| 0.347 | 345.0 | 2415 | 0.1608 | 31.0547 | 27.1163 | 30.0154 | 30.4102 | 19.0 |
| 0.347 | 346.0 | 2422 | 0.1607 | 31.0547 | 27.1163 | 30.0154 | 30.4102 | 19.0 |
| 0.347 | 347.0 | 2429 | 0.1603 | 31.0547 | 27.1163 | 30.0154 | 30.4102 | 19.0 |
| 0.347 | 348.0 | 2436 | 0.1600 | 31.0547 | 27.1163 | 30.0154 | 30.4102 | 19.0 |
| 0.347 | 349.0 | 2443 | 0.1594 | 31.1731 | 27.2299 | 30.0025 | 30.458 | 19.0 |
| 0.347 | 350.0 | 2450 | 0.1590 | 31.3571 | 27.2889 | 30.0694 | 30.5056 | 19.0 |
| 0.347 | 351.0 | 2457 | 0.1586 | 31.3571 | 27.2889 | 30.0694 | 30.5056 | 19.0 |
| 0.347 | 352.0 | 2464 | 0.1583 | 31.3499 | 27.4223 | 30.2332 | 30.6358 | 19.0 |
| 0.347 | 353.0 | 2471 | 0.1579 | 31.3499 | 27.4223 | 30.2332 | 30.6358 | 19.0 |
| 0.347 | 354.0 | 2478 | 0.1575 | 31.3888 | 27.4223 | 30.2332 | 30.6358 | 19.0 |
| 0.347 | 355.0 | 2485 | 0.1571 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.347 | 356.0 | 2492 | 0.1566 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.347 | 357.0 | 2499 | 0.1561 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 358.0 | 2506 | 0.1556 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 359.0 | 2513 | 0.1549 | 31.2625 | 27.3405 | 30.2832 | 30.6299 | 19.0 |
| 0.2715 | 360.0 | 2520 | 0.1546 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 361.0 | 2527 | 0.1545 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 362.0 | 2534 | 0.1543 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 363.0 | 2541 | 0.1541 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 364.0 | 2548 | 0.1542 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 365.0 | 2555 | 0.1540 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 366.0 | 2562 | 0.1536 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 367.0 | 2569 | 0.1532 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 368.0 | 2576 | 0.1530 | 31.4959 | 27.4871 | 30.3516 | 30.7743 | 19.0 |
| 0.2715 | 369.0 | 2583 | 0.1526 | 31.4959 | 27.4871 | 30.3516 | 30.7743 | 19.0 |
| 0.2715 | 370.0 | 2590 | 0.1521 | 31.4959 | 27.4871 | 30.3516 | 30.7743 | 19.0 |
| 0.2715 | 371.0 | 2597 | 0.1515 | 31.4959 | 27.4871 | 30.3516 | 30.7743 | 19.0 |
| 0.2715 | 372.0 | 2604 | 0.1510 | 31.4959 | 27.4871 | 30.3516 | 30.7743 | 19.0 |
| 0.2715 | 373.0 | 2611 | 0.1507 | 31.4959 | 27.4871 | 30.3516 | 30.7743 | 19.0 |
| 0.2715 | 374.0 | 2618 | 0.1504 | 31.4959 | 27.4871 | 30.3516 | 30.7743 | 19.0 |
| 0.2715 | 375.0 | 2625 | 0.1500 | 31.4959 | 27.4871 | 30.3516 | 30.7743 | 19.0 |
| 0.2715 | 376.0 | 2632 | 0.1495 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 377.0 | 2639 | 0.1491 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 378.0 | 2646 | 0.1489 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 379.0 | 2653 | 0.1486 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 380.0 | 2660 | 0.1483 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 381.0 | 2667 | 0.1482 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 382.0 | 2674 | 0.1480 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 383.0 | 2681 | 0.1479 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 384.0 | 2688 | 0.1480 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 385.0 | 2695 | 0.1477 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 386.0 | 2702 | 0.1476 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 387.0 | 2709 | 0.1471 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 388.0 | 2716 | 0.1468 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 389.0 | 2723 | 0.1467 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 390.0 | 2730 | 0.1463 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 391.0 | 2737 | 0.1460 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 392.0 | 2744 | 0.1457 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 393.0 | 2751 | 0.1453 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 394.0 | 2758 | 0.1448 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 395.0 | 2765 | 0.1446 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 396.0 | 2772 | 0.1443 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 397.0 | 2779 | 0.1436 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 398.0 | 2786 | 0.1433 | 31.4259 | 27.452 | 30.2729 | 30.6697 | 19.0 |
| 0.2715 | 399.0 | 2793 | 0.1430 | 31.4399 | 27.476 | 30.2554 | 30.6717 | 19.0 |
| 0.2715 | 400.0 | 2800 | 0.1429 | 31.4872 | 27.5228 | 30.2981 | 30.7255 | 19.0 |
| 0.2715 | 401.0 | 2807 | 0.1427 | 31.4872 | 27.5228 | 30.2981 | 30.7255 | 19.0 |
| 0.2715 | 402.0 | 2814 | 0.1424 | 31.5099 | 27.5424 | 30.3241 | 30.744 | 19.0 |
| 0.2715 | 403.0 | 2821 | 0.1422 | 31.5099 | 27.5424 | 30.3241 | 30.744 | 19.0 |
| 0.2715 | 404.0 | 2828 | 0.1420 | 31.5099 | 27.5424 | 30.3241 | 30.744 | 19.0 |
| 0.2715 | 405.0 | 2835 | 0.1419 | 31.6471 | 27.5429 | 30.3257 | 30.77 | 19.0 |
| 0.2715 | 406.0 | 2842 | 0.1417 | 31.6471 | 27.5429 | 30.3257 | 30.77 | 19.0 |
| 0.2715 | 407.0 | 2849 | 0.1414 | 31.6471 | 27.5429 | 30.3257 | 30.77 | 19.0 |
| 0.2715 | 408.0 | 2856 | 0.1410 | 31.5919 | 27.514 | 30.295 | 30.7228 | 19.0 |
| 0.2715 | 409.0 | 2863 | 0.1408 | 31.4463 | 27.3594 | 30.3027 | 30.6642 | 19.0 |
| 0.2715 | 410.0 | 2870 | 0.1406 | 31.4463 | 27.3594 | 30.3027 | 30.6642 | 19.0 |
| 0.2715 | 411.0 | 2877 | 0.1403 | 31.5068 | 27.4117 | 30.3334 | 30.6937 | 19.0 |
| 0.2715 | 412.0 | 2884 | 0.1400 | 31.5468 | 27.456 | 30.3719 | 30.7612 | 19.0 |
| 0.2715 | 413.0 | 2891 | 0.1395 | 31.5014 | 27.4196 | 30.3412 | 30.7303 | 19.0 |
| 0.2715 | 414.0 | 2898 | 0.1393 | 31.5014 | 27.4196 | 30.3412 | 30.7303 | 19.0 |
| 0.2715 | 415.0 | 2905 | 0.1393 | 31.5014 | 27.4196 | 30.3412 | 30.7303 | 19.0 |
| 0.2715 | 416.0 | 2912 | 0.1392 | 31.2855 | 27.3007 | 30.275 | 30.6413 | 19.0 |
| 0.2715 | 417.0 | 2919 | 0.1390 | 31.2232 | 27.2724 | 30.2434 | 30.599 | 19.0 |
| 0.2715 | 418.0 | 2926 | 0.1388 | 31.2232 | 27.2724 | 30.2434 | 30.599 | 19.0 |
| 0.2715 | 419.0 | 2933 | 0.1384 | 31.2232 | 27.2724 | 30.2434 | 30.599 | 19.0 |
| 0.2715 | 420.0 | 2940 | 0.1379 | 31.5156 | 27.5155 | 30.4983 | 30.7383 | 19.0 |
| 0.2715 | 421.0 | 2947 | 0.1374 | 31.5753 | 27.5683 | 30.5421 | 30.7782 | 19.0 |
| 0.2715 | 422.0 | 2954 | 0.1371 | 31.6484 | 27.5932 | 30.5844 | 30.8486 | 19.0 |
| 0.2715 | 423.0 | 2961 | 0.1368 | 31.7452 | 27.6767 | 30.6858 | 30.9443 | 19.0 |
| 0.2715 | 424.0 | 2968 | 0.1365 | 31.7452 | 27.6767 | 30.6858 | 30.9443 | 19.0 |
| 0.2715 | 425.0 | 2975 | 0.1366 | 31.6852 | 27.6514 | 30.6511 | 30.8842 | 19.0 |
| 0.2715 | 426.0 | 2982 | 0.1366 | 31.6194 | 27.6082 | 30.6236 | 30.8361 | 19.0 |
| 0.2715 | 427.0 | 2989 | 0.1365 | 31.5753 | 27.5683 | 30.5421 | 30.7782 | 19.0 |
| 0.2715 | 428.0 | 2996 | 0.1363 | 31.5753 | 27.5683 | 30.5421 | 30.7782 | 19.0 |
| 0.2217 | 429.0 | 3003 | 0.1359 | 31.5156 | 27.5155 | 30.4983 | 30.7383 | 19.0 |
| 0.2217 | 430.0 | 3010 | 0.1357 | 31.5156 | 27.5155 | 30.4983 | 30.7383 | 19.0 |
| 0.2217 | 431.0 | 3017 | 0.1353 | 31.5156 | 27.5155 | 30.4983 | 30.7383 | 19.0 |
| 0.2217 | 432.0 | 3024 | 0.1346 | 31.5932 | 27.513 | 30.4589 | 30.786 | 19.0 |
| 0.2217 | 433.0 | 3031 | 0.1340 | 31.5932 | 27.513 | 30.4589 | 30.786 | 19.0 |
| 0.2217 | 434.0 | 3038 | 0.1336 | 32.0771 | 27.895 | 30.9182 | 31.3334 | 19.0 |
| 0.2217 | 435.0 | 3045 | 0.1332 | 32.1306 | 27.9949 | 30.991 | 31.3535 | 19.0 |
| 0.2217 | 436.0 | 3052 | 0.1330 | 32.0795 | 27.9442 | 30.9518 | 31.3388 | 19.0 |
| 0.2217 | 437.0 | 3059 | 0.1326 | 32.0795 | 27.9442 | 30.9518 | 31.3388 | 19.0 |
| 0.2217 | 438.0 | 3066 | 0.1322 | 32.0795 | 27.9442 | 30.9518 | 31.3388 | 19.0 |
| 0.2217 | 439.0 | 3073 | 0.1318 | 32.0213 | 27.8584 | 30.868 | 31.2781 | 19.0 |
| 0.2217 | 440.0 | 3080 | 0.1314 | 32.0843 | 27.9836 | 30.9569 | 31.333 | 19.0 |
| 0.2217 | 441.0 | 3087 | 0.1312 | 31.8913 | 27.8318 | 30.9259 | 31.216 | 19.0 |
| 0.2217 | 442.0 | 3094 | 0.1312 | 31.8913 | 27.8318 | 30.9259 | 31.216 | 19.0 |
| 0.2217 | 443.0 | 3101 | 0.1311 | 31.8913 | 27.8318 | 30.9259 | 31.216 | 19.0 |
| 0.2217 | 444.0 | 3108 | 0.1310 | 32.0795 | 27.9442 | 30.9518 | 31.3388 | 19.0 |
| 0.2217 | 445.0 | 3115 | 0.1308 | 32.0795 | 27.9442 | 30.9518 | 31.3388 | 19.0 |
| 0.2217 | 446.0 | 3122 | 0.1309 | 32.0795 | 27.9442 | 30.9518 | 31.3388 | 19.0 |
| 0.2217 | 447.0 | 3129 | 0.1308 | 32.0771 | 27.895 | 30.9182 | 31.3334 | 19.0 |
| 0.2217 | 448.0 | 3136 | 0.1306 | 32.0771 | 27.895 | 30.9182 | 31.3334 | 19.0 |
| 0.2217 | 449.0 | 3143 | 0.1303 | 32.0771 | 27.895 | 30.9182 | 31.3334 | 19.0 |
| 0.2217 | 450.0 | 3150 | 0.1300 | 32.0771 | 27.895 | 30.9182 | 31.3334 | 19.0 |
| 0.2217 | 451.0 | 3157 | 0.1297 | 32.0213 | 27.8584 | 30.868 | 31.2781 | 19.0 |
| 0.2217 | 452.0 | 3164 | 0.1296 | 32.0213 | 27.8584 | 30.868 | 31.2781 | 19.0 |
| 0.2217 | 453.0 | 3171 | 0.1294 | 32.0213 | 27.8584 | 30.868 | 31.2781 | 19.0 |
| 0.2217 | 454.0 | 3178 | 0.1291 | 31.8895 | 27.7951 | 30.8705 | 31.2123 | 19.0 |
| 0.2217 | 455.0 | 3185 | 0.1288 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 456.0 | 3192 | 0.1285 | 31.4638 | 27.4416 | 30.3997 | 30.6841 | 19.0 |
| 0.2217 | 457.0 | 3199 | 0.1280 | 31.4638 | 27.4416 | 30.3997 | 30.6841 | 19.0 |
| 0.2217 | 458.0 | 3206 | 0.1277 | 31.4638 | 27.4416 | 30.3997 | 30.6841 | 19.0 |
| 0.2217 | 459.0 | 3213 | 0.1273 | 31.4638 | 27.4416 | 30.3997 | 30.6841 | 19.0 |
| 0.2217 | 460.0 | 3220 | 0.1272 | 31.4638 | 27.4416 | 30.3997 | 30.6841 | 19.0 |
| 0.2217 | 461.0 | 3227 | 0.1272 | 31.4638 | 27.4416 | 30.3997 | 30.6841 | 19.0 |
| 0.2217 | 462.0 | 3234 | 0.1271 | 31.4638 | 27.4416 | 30.3997 | 30.6841 | 19.0 |
| 0.2217 | 463.0 | 3241 | 0.1271 | 31.4638 | 27.4416 | 30.3997 | 30.6841 | 19.0 |
| 0.2217 | 464.0 | 3248 | 0.1270 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 465.0 | 3255 | 0.1269 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 466.0 | 3262 | 0.1266 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 467.0 | 3269 | 0.1264 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 468.0 | 3276 | 0.1263 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 469.0 | 3283 | 0.1261 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 470.0 | 3290 | 0.1257 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 471.0 | 3297 | 0.1255 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 472.0 | 3304 | 0.1252 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 473.0 | 3311 | 0.1249 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 474.0 | 3318 | 0.1246 | 31.8895 | 27.7951 | 30.8705 | 31.2123 | 19.0 |
| 0.2217 | 475.0 | 3325 | 0.1243 | 31.8895 | 27.7951 | 30.8705 | 31.2123 | 19.0 |
| 0.2217 | 476.0 | 3332 | 0.1240 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 477.0 | 3339 | 0.1237 | 31.8403 | 27.7801 | 30.8793 | 31.1503 | 19.0 |
| 0.2217 | 478.0 | 3346 | 0.1235 | 31.8403 | 27.7801 | 30.8793 | 31.1503 | 19.0 |
| 0.2217 | 479.0 | 3353 | 0.1233 | 31.8403 | 27.7801 | 30.8793 | 31.1503 | 19.0 |
| 0.2217 | 480.0 | 3360 | 0.1233 | 31.8403 | 27.7801 | 30.8793 | 31.1503 | 19.0 |
| 0.2217 | 481.0 | 3367 | 0.1231 | 31.8403 | 27.7801 | 30.8793 | 31.1503 | 19.0 |
| 0.2217 | 482.0 | 3374 | 0.1230 | 31.8403 | 27.7801 | 30.8793 | 31.1503 | 19.0 |
| 0.2217 | 483.0 | 3381 | 0.1231 | 31.8403 | 27.7801 | 30.8793 | 31.1503 | 19.0 |
| 0.2217 | 484.0 | 3388 | 0.1231 | 31.8288 | 27.7843 | 30.9031 | 31.1868 | 19.0 |
| 0.2217 | 485.0 | 3395 | 0.1230 | 31.8288 | 27.7843 | 30.9031 | 31.1868 | 19.0 |
| 0.2217 | 486.0 | 3402 | 0.1228 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 487.0 | 3409 | 0.1226 | 31.8288 | 27.7843 | 30.9031 | 31.1868 | 19.0 |
| 0.2217 | 488.0 | 3416 | 0.1223 | 31.8288 | 27.7843 | 30.9031 | 31.1868 | 19.0 |
| 0.2217 | 489.0 | 3423 | 0.1219 | 31.5136 | 27.4987 | 30.422 | 30.7353 | 19.0 |
| 0.2217 | 490.0 | 3430 | 0.1213 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.2217 | 491.0 | 3437 | 0.1209 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.2217 | 492.0 | 3444 | 0.1207 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.2217 | 493.0 | 3451 | 0.1204 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.2217 | 494.0 | 3458 | 0.1203 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.2217 | 495.0 | 3465 | 0.1202 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.2217 | 496.0 | 3472 | 0.1201 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.2217 | 497.0 | 3479 | 0.1201 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.2217 | 498.0 | 3486 | 0.1201 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.2217 | 499.0 | 3493 | 0.1202 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.1919 | 500.0 | 3500 | 0.1203 | 31.7997 | 27.7363 | 30.8529 | 31.1419 | 19.0 |
| 0.1919 | 501.0 | 3507 | 0.1201 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 502.0 | 3514 | 0.1199 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 503.0 | 3521 | 0.1196 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 504.0 | 3528 | 0.1194 | 31.8944 | 27.8477 | 31.0667 | 31.2807 | 19.0 |
| 0.1919 | 505.0 | 3535 | 0.1192 | 31.5791 | 27.543 | 30.6956 | 30.8772 | 19.0 |
| 0.1919 | 506.0 | 3542 | 0.1190 | 31.9364 | 27.878 | 31.107 | 31.2827 | 19.0 |
| 0.1919 | 507.0 | 3549 | 0.1189 | 31.9364 | 27.878 | 31.107 | 31.2827 | 19.0 |
| 0.1919 | 508.0 | 3556 | 0.1187 | 31.9364 | 27.878 | 31.107 | 31.2827 | 19.0 |
| 0.1919 | 509.0 | 3563 | 0.1184 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 510.0 | 3570 | 0.1182 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 511.0 | 3577 | 0.1180 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 512.0 | 3584 | 0.1178 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 513.0 | 3591 | 0.1177 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 514.0 | 3598 | 0.1177 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 515.0 | 3605 | 0.1175 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 516.0 | 3612 | 0.1172 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 517.0 | 3619 | 0.1170 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 518.0 | 3626 | 0.1167 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 519.0 | 3633 | 0.1164 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 520.0 | 3640 | 0.1163 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 521.0 | 3647 | 0.1161 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 522.0 | 3654 | 0.1159 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 523.0 | 3661 | 0.1160 | 31.7023 | 27.6923 | 30.8243 | 31.0915 | 19.0 |
| 0.1919 | 524.0 | 3668 | 0.1160 | 31.7467 | 27.8062 | 30.8612 | 31.1419 | 19.0 |
| 0.1919 | 525.0 | 3675 | 0.1158 | 31.7467 | 27.8062 | 30.8612 | 31.1419 | 19.0 |
| 0.1919 | 526.0 | 3682 | 0.1157 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 527.0 | 3689 | 0.1156 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 528.0 | 3696 | 0.1155 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 529.0 | 3703 | 0.1153 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 530.0 | 3710 | 0.1152 | 31.912 | 27.8318 | 31.1148 | 31.2817 | 19.0 |
| 0.1919 | 531.0 | 3717 | 0.1151 | 31.912 | 27.8318 | 31.1148 | 31.2817 | 19.0 |
| 0.1919 | 532.0 | 3724 | 0.1149 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 533.0 | 3731 | 0.1147 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 534.0 | 3738 | 0.1146 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 535.0 | 3745 | 0.1145 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 536.0 | 3752 | 0.1144 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 537.0 | 3759 | 0.1143 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 538.0 | 3766 | 0.1141 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 539.0 | 3773 | 0.1140 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 540.0 | 3780 | 0.1140 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 541.0 | 3787 | 0.1140 | 31.9543 | 27.8444 | 31.0871 | 31.2788 | 19.0 |
| 0.1919 | 542.0 | 3794 | 0.1139 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 543.0 | 3801 | 0.1138 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 544.0 | 3808 | 0.1137 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 545.0 | 3815 | 0.1136 | 31.8554 | 27.7844 | 31.0721 | 31.2268 | 19.0 |
| 0.1919 | 546.0 | 3822 | 0.1134 | 31.912 | 27.8318 | 31.1148 | 31.2817 | 19.0 |
| 0.1919 | 547.0 | 3829 | 0.1132 | 31.9747 | 27.8786 | 31.1492 | 31.3314 | 19.0 |
| 0.1919 | 548.0 | 3836 | 0.1131 | 31.9747 | 27.8786 | 31.1492 | 31.3314 | 19.0 |
| 0.1919 | 549.0 | 3843 | 0.1129 | 31.9747 | 27.8786 | 31.1492 | 31.3314 | 19.0 |
| 0.1919 | 550.0 | 3850 | 0.1127 | 31.912 | 27.8318 | 31.1148 | 31.2817 | 19.0 |
| 0.1919 | 551.0 | 3857 | 0.1124 | 31.912 | 27.8318 | 31.1148 | 31.2817 | 19.0 |
| 0.1919 | 552.0 | 3864 | 0.1122 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 553.0 | 3871 | 0.1122 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 554.0 | 3878 | 0.1122 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 555.0 | 3885 | 0.1120 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 556.0 | 3892 | 0.1119 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 557.0 | 3899 | 0.1118 | 31.912 | 27.8318 | 31.1148 | 31.2817 | 19.0 |
| 0.1919 | 558.0 | 3906 | 0.1117 | 31.912 | 27.8318 | 31.1148 | 31.2817 | 19.0 |
| 0.1919 | 559.0 | 3913 | 0.1115 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 560.0 | 3920 | 0.1114 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 561.0 | 3927 | 0.1114 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 562.0 | 3934 | 0.1114 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 563.0 | 3941 | 0.1112 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 564.0 | 3948 | 0.1109 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 565.0 | 3955 | 0.1107 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 566.0 | 3962 | 0.1105 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 567.0 | 3969 | 0.1102 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 568.0 | 3976 | 0.1099 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1919 | 569.0 | 3983 | 0.1098 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1919 | 570.0 | 3990 | 0.1096 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1919 | 571.0 | 3997 | 0.1095 | 31.9849 | 27.874 | 31.1422 | 31.3089 | 19.0 |
| 0.1677 | 572.0 | 4004 | 0.1093 | 31.9849 | 27.874 | 31.1422 | 31.3089 | 19.0 |
| 0.1677 | 573.0 | 4011 | 0.1093 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1677 | 574.0 | 4018 | 0.1094 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1677 | 575.0 | 4025 | 0.1095 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1677 | 576.0 | 4032 | 0.1095 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1677 | 577.0 | 4039 | 0.1094 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1677 | 578.0 | 4046 | 0.1092 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1677 | 579.0 | 4053 | 0.1090 | 31.9982 | 27.874 | 31.1397 | 31.3293 | 19.0 |
| 0.1677 | 580.0 | 4060 | 0.1088 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1677 | 581.0 | 4067 | 0.1086 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 582.0 | 4074 | 0.1085 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 583.0 | 4081 | 0.1084 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 584.0 | 4088 | 0.1081 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 585.0 | 4095 | 0.1079 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 586.0 | 4102 | 0.1077 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 587.0 | 4109 | 0.1077 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 588.0 | 4116 | 0.1075 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 589.0 | 4123 | 0.1075 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 590.0 | 4130 | 0.1076 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 591.0 | 4137 | 0.1074 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 592.0 | 4144 | 0.1072 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 593.0 | 4151 | 0.1068 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 594.0 | 4158 | 0.1065 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 595.0 | 4165 | 0.1063 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 596.0 | 4172 | 0.1063 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1677 | 597.0 | 4179 | 0.1062 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1677 | 598.0 | 4186 | 0.1060 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 599.0 | 4193 | 0.1059 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 600.0 | 4200 | 0.1058 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 601.0 | 4207 | 0.1055 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 602.0 | 4214 | 0.1055 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 603.0 | 4221 | 0.1054 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 604.0 | 4228 | 0.1053 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 605.0 | 4235 | 0.1050 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 606.0 | 4242 | 0.1049 | 32.085 | 27.9511 | 31.1967 | 31.3998 | 19.0 |
| 0.1677 | 607.0 | 4249 | 0.1045 | 32.085 | 27.9511 | 31.1967 | 31.3998 | 19.0 |
| 0.1677 | 608.0 | 4256 | 0.1042 | 32.085 | 27.9511 | 31.1967 | 31.3998 | 19.0 |
| 0.1677 | 609.0 | 4263 | 0.1040 | 32.085 | 27.9511 | 31.1967 | 31.3998 | 19.0 |
| 0.1677 | 610.0 | 4270 | 0.1039 | 32.085 | 27.9511 | 31.1967 | 31.3998 | 19.0 |
| 0.1677 | 611.0 | 4277 | 0.1037 | 32.1776 | 27.9835 | 31.2174 | 31.4851 | 19.0 |
| 0.1677 | 612.0 | 4284 | 0.1035 | 32.1776 | 27.9835 | 31.2174 | 31.4851 | 19.0 |
| 0.1677 | 613.0 | 4291 | 0.1034 | 32.085 | 27.9511 | 31.1967 | 31.3998 | 19.0 |
| 0.1677 | 614.0 | 4298 | 0.1033 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 615.0 | 4305 | 0.1032 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 616.0 | 4312 | 0.1031 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 617.0 | 4319 | 0.1031 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 618.0 | 4326 | 0.1030 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1677 | 619.0 | 4333 | 0.1029 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1677 | 620.0 | 4340 | 0.1028 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1677 | 621.0 | 4347 | 0.1026 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1677 | 622.0 | 4354 | 0.1025 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1677 | 623.0 | 4361 | 0.1024 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1677 | 624.0 | 4368 | 0.1022 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1677 | 625.0 | 4375 | 0.1022 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1677 | 626.0 | 4382 | 0.1021 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1677 | 627.0 | 4389 | 0.1020 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1677 | 628.0 | 4396 | 0.1019 | 31.6985 | 27.6005 | 30.7596 | 31.0373 | 19.0 |
| 0.1677 | 629.0 | 4403 | 0.1018 | 31.6985 | 27.6005 | 30.7596 | 31.0373 | 19.0 |
| 0.1677 | 630.0 | 4410 | 0.1017 | 31.6985 | 27.6005 | 30.7596 | 31.0373 | 19.0 |
| 0.1677 | 631.0 | 4417 | 0.1016 | 31.6985 | 27.6005 | 30.7596 | 31.0373 | 19.0 |
| 0.1677 | 632.0 | 4424 | 0.1014 | 31.6786 | 27.5742 | 30.7404 | 30.9724 | 19.0 |
| 0.1677 | 633.0 | 4431 | 0.1012 | 31.6786 | 27.5742 | 30.7404 | 30.9724 | 19.0 |
| 0.1677 | 634.0 | 4438 | 0.1011 | 31.6786 | 27.5742 | 30.7404 | 30.9724 | 19.0 |
| 0.1677 | 635.0 | 4445 | 0.1010 | 31.6786 | 27.5742 | 30.7404 | 30.9724 | 19.0 |
| 0.1677 | 636.0 | 4452 | 0.1008 | 31.6786 | 27.5742 | 30.7404 | 30.9724 | 19.0 |
| 0.1677 | 637.0 | 4459 | 0.1007 | 31.6786 | 27.5742 | 30.7404 | 30.9724 | 19.0 |
| 0.1677 | 638.0 | 4466 | 0.1006 | 31.6786 | 27.5742 | 30.7404 | 30.9724 | 19.0 |
| 0.1677 | 639.0 | 4473 | 0.1005 | 31.6786 | 27.5742 | 30.7404 | 30.9724 | 19.0 |
| 0.1677 | 640.0 | 4480 | 0.1004 | 32.0412 | 27.913 | 31.1743 | 31.3412 | 19.0 |
| 0.1677 | 641.0 | 4487 | 0.1002 | 32.0582 | 27.9063 | 31.1665 | 31.3564 | 19.0 |
| 0.1677 | 642.0 | 4494 | 0.1002 | 32.0582 | 27.9063 | 31.1665 | 31.3564 | 19.0 |
| 0.1526 | 643.0 | 4501 | 0.1002 | 32.0582 | 27.9063 | 31.1665 | 31.3564 | 19.0 |
| 0.1526 | 644.0 | 4508 | 0.1001 | 32.0412 | 27.9593 | 31.2074 | 31.3412 | 19.0 |
| 0.1526 | 645.0 | 4515 | 0.1001 | 32.0412 | 27.9593 | 31.2074 | 31.3412 | 19.0 |
| 0.1526 | 646.0 | 4522 | 0.1000 | 32.0412 | 27.9593 | 31.2074 | 31.3412 | 19.0 |
| 0.1526 | 647.0 | 4529 | 0.1000 | 31.6616 | 27.6142 | 30.7807 | 30.9488 | 19.0 |
| 0.1526 | 648.0 | 4536 | 0.0999 | 31.6848 | 27.6394 | 30.7922 | 31.0144 | 19.0 |
| 0.1526 | 649.0 | 4543 | 0.0997 | 31.6848 | 27.6394 | 30.7922 | 31.0144 | 19.0 |
| 0.1526 | 650.0 | 4550 | 0.0996 | 31.6757 | 27.5737 | 30.7293 | 30.9968 | 19.0 |
| 0.1526 | 651.0 | 4557 | 0.0995 | 31.6757 | 27.5737 | 30.7293 | 30.9968 | 19.0 |
| 0.1526 | 652.0 | 4564 | 0.0995 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 653.0 | 4571 | 0.0994 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 654.0 | 4578 | 0.0993 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 655.0 | 4585 | 0.0991 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 656.0 | 4592 | 0.0990 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 657.0 | 4599 | 0.0988 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 658.0 | 4606 | 0.0988 | 32.0327 | 27.9633 | 31.2014 | 31.3696 | 19.0 |
| 0.1526 | 659.0 | 4613 | 0.0987 | 32.0327 | 27.9633 | 31.2014 | 31.3696 | 19.0 |
| 0.1526 | 660.0 | 4620 | 0.0987 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1526 | 661.0 | 4627 | 0.0986 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1526 | 662.0 | 4634 | 0.0985 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1526 | 663.0 | 4641 | 0.0984 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1526 | 664.0 | 4648 | 0.0984 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1526 | 665.0 | 4655 | 0.0984 | 32.0506 | 27.932 | 31.1746 | 31.3906 | 19.0 |
| 0.1526 | 666.0 | 4662 | 0.0986 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 667.0 | 4669 | 0.0987 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 668.0 | 4676 | 0.0988 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 669.0 | 4683 | 0.0987 | 32.0327 | 27.9365 | 31.1785 | 31.3696 | 19.0 |
| 0.1526 | 670.0 | 4690 | 0.0985 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1526 | 671.0 | 4697 | 0.0984 | 31.6757 | 27.5737 | 30.7293 | 30.9968 | 19.0 |
| 0.1526 | 672.0 | 4704 | 0.0983 | 31.6757 | 27.5737 | 30.7293 | 30.9968 | 19.0 |
| 0.1526 | 673.0 | 4711 | 0.0983 | 31.6757 | 27.5737 | 30.7293 | 30.9968 | 19.0 |
| 0.1526 | 674.0 | 4718 | 0.0984 | 31.6757 | 27.5737 | 30.7293 | 30.9968 | 19.0 |
| 0.1526 | 675.0 | 4725 | 0.0984 | 31.6757 | 27.5737 | 30.7293 | 30.9968 | 19.0 |
| 0.1526 | 676.0 | 4732 | 0.0984 | 31.6757 | 27.5737 | 30.7293 | 30.9968 | 19.0 |
| 0.1526 | 677.0 | 4739 | 0.0983 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1526 | 678.0 | 4746 | 0.0981 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1526 | 679.0 | 4753 | 0.0981 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1526 | 680.0 | 4760 | 0.0980 | 31.6882 | 27.5693 | 30.7136 | 31.0183 | 19.0 |
| 0.1526 | 681.0 | 4767 | 0.0980 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 682.0 | 4774 | 0.0977 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 683.0 | 4781 | 0.0975 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 684.0 | 4788 | 0.0972 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 685.0 | 4795 | 0.0972 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 686.0 | 4802 | 0.0970 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 687.0 | 4809 | 0.0969 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 688.0 | 4816 | 0.0967 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 689.0 | 4823 | 0.0966 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 690.0 | 4830 | 0.0965 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 691.0 | 4837 | 0.0964 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 692.0 | 4844 | 0.0964 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 693.0 | 4851 | 0.0962 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 694.0 | 4858 | 0.0960 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1526 | 695.0 | 4865 | 0.0960 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 696.0 | 4872 | 0.0959 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 697.0 | 4879 | 0.0959 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 698.0 | 4886 | 0.0958 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 699.0 | 4893 | 0.0957 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1526 | 700.0 | 4900 | 0.0957 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1526 | 701.0 | 4907 | 0.0956 | 32.3596 | 28.3361 | 31.4021 | 31.5663 | 19.0 |
| 0.1526 | 702.0 | 4914 | 0.0956 | 32.3596 | 28.3361 | 31.4021 | 31.5663 | 19.0 |
| 0.1526 | 703.0 | 4921 | 0.0956 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 704.0 | 4928 | 0.0956 | 31.9544 | 27.9434 | 30.9621 | 31.2208 | 19.0 |
| 0.1526 | 705.0 | 4935 | 0.0955 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 706.0 | 4942 | 0.0954 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 707.0 | 4949 | 0.0953 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 708.0 | 4956 | 0.0952 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 709.0 | 4963 | 0.0950 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 710.0 | 4970 | 0.0948 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1526 | 711.0 | 4977 | 0.0949 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1526 | 712.0 | 4984 | 0.0948 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1526 | 713.0 | 4991 | 0.0948 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1526 | 714.0 | 4998 | 0.0947 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 715.0 | 5005 | 0.0946 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 716.0 | 5012 | 0.0946 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 717.0 | 5019 | 0.0947 | 32.3596 | 28.3361 | 31.4021 | 31.5663 | 19.0 |
| 0.1404 | 718.0 | 5026 | 0.0946 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 719.0 | 5033 | 0.0946 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 720.0 | 5040 | 0.0946 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 721.0 | 5047 | 0.0946 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 722.0 | 5054 | 0.0946 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 723.0 | 5061 | 0.0946 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 724.0 | 5068 | 0.0945 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 725.0 | 5075 | 0.0944 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 726.0 | 5082 | 0.0943 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 727.0 | 5089 | 0.0941 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 728.0 | 5096 | 0.0940 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 729.0 | 5103 | 0.0940 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 730.0 | 5110 | 0.0940 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 731.0 | 5117 | 0.0939 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 732.0 | 5124 | 0.0938 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 733.0 | 5131 | 0.0938 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 734.0 | 5138 | 0.0937 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 735.0 | 5145 | 0.0936 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 736.0 | 5152 | 0.0936 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 737.0 | 5159 | 0.0935 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 738.0 | 5166 | 0.0934 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 739.0 | 5173 | 0.0934 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 740.0 | 5180 | 0.0934 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 741.0 | 5187 | 0.0934 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 742.0 | 5194 | 0.0934 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 743.0 | 5201 | 0.0933 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 744.0 | 5208 | 0.0933 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 745.0 | 5215 | 0.0932 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 746.0 | 5222 | 0.0931 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 747.0 | 5229 | 0.0930 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 748.0 | 5236 | 0.0929 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 749.0 | 5243 | 0.0928 | 32.3934 | 28.3265 | 31.3864 | 31.594 | 19.0 |
| 0.1404 | 750.0 | 5250 | 0.0927 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 751.0 | 5257 | 0.0926 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 752.0 | 5264 | 0.0925 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 753.0 | 5271 | 0.0925 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 754.0 | 5278 | 0.0924 | 31.9801 | 27.9368 | 30.9543 | 31.2445 | 19.0 |
| 0.1404 | 755.0 | 5285 | 0.0923 | 32.0859 | 28.1042 | 31.089 | 31.3416 | 19.0 |
| 0.1404 | 756.0 | 5292 | 0.0923 | 32.0535 | 28.1711 | 31.143 | 31.3184 | 19.0 |
| 0.1404 | 757.0 | 5299 | 0.0922 | 32.0535 | 28.1711 | 31.143 | 31.3184 | 19.0 |
| 0.1404 | 758.0 | 5306 | 0.0922 | 32.4741 | 28.5522 | 31.5417 | 31.6689 | 19.0 |
| 0.1404 | 759.0 | 5313 | 0.0921 | 32.4741 | 28.5522 | 31.5417 | 31.6689 | 19.0 |
| 0.1404 | 760.0 | 5320 | 0.0921 | 32.4141 | 28.4885 | 31.4995 | 31.6445 | 19.0 |
| 0.1404 | 761.0 | 5327 | 0.0920 | 32.4141 | 28.4885 | 31.4995 | 31.6445 | 19.0 |
| 0.1404 | 762.0 | 5334 | 0.0919 | 32.4141 | 28.4885 | 31.4995 | 31.6445 | 19.0 |
| 0.1404 | 763.0 | 5341 | 0.0918 | 32.4141 | 28.4707 | 31.476 | 31.6445 | 19.0 |
| 0.1404 | 764.0 | 5348 | 0.0918 | 32.4141 | 28.4707 | 31.476 | 31.6445 | 19.0 |
| 0.1404 | 765.0 | 5355 | 0.0917 | 32.4741 | 28.5078 | 31.5177 | 31.6689 | 19.0 |
| 0.1404 | 766.0 | 5362 | 0.0917 | 32.4741 | 28.5078 | 31.5177 | 31.6689 | 19.0 |
| 0.1404 | 767.0 | 5369 | 0.0916 | 32.4741 | 28.5078 | 31.5177 | 31.6689 | 19.0 |
| 0.1404 | 768.0 | 5376 | 0.0916 | 32.4741 | 28.5078 | 31.5177 | 31.6689 | 19.0 |
| 0.1404 | 769.0 | 5383 | 0.0916 | 32.4741 | 28.5078 | 31.5177 | 31.6689 | 19.0 |
| 0.1404 | 770.0 | 5390 | 0.0916 | 32.4741 | 28.5078 | 31.5177 | 31.6689 | 19.0 |
| 0.1404 | 771.0 | 5397 | 0.0914 | 32.0535 | 28.1042 | 31.1008 | 31.3184 | 19.0 |
| 0.1404 | 772.0 | 5404 | 0.0914 | 32.0535 | 28.1042 | 31.1008 | 31.3184 | 19.0 |
| 0.1404 | 773.0 | 5411 | 0.0913 | 32.0535 | 28.1711 | 31.143 | 31.3184 | 19.0 |
| 0.1404 | 774.0 | 5418 | 0.0911 | 32.0185 | 28.0995 | 31.1163 | 31.2507 | 19.0 |
| 0.1404 | 775.0 | 5425 | 0.0909 | 32.0185 | 28.0995 | 31.1163 | 31.2507 | 19.0 |
| 0.1404 | 776.0 | 5432 | 0.0908 | 32.0185 | 28.0995 | 31.1163 | 31.2507 | 19.0 |
| 0.1404 | 777.0 | 5439 | 0.0907 | 32.0185 | 28.0995 | 31.1163 | 31.2507 | 19.0 |
| 0.1404 | 778.0 | 5446 | 0.0908 | 32.0185 | 28.0995 | 31.1163 | 31.2507 | 19.0 |
| 0.1404 | 779.0 | 5453 | 0.0908 | 32.0185 | 28.0995 | 31.1163 | 31.2507 | 19.0 |
| 0.1404 | 780.0 | 5460 | 0.0907 | 32.0185 | 28.0995 | 31.1163 | 31.2507 | 19.0 |
| 0.1404 | 781.0 | 5467 | 0.0906 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1404 | 782.0 | 5474 | 0.0906 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1404 | 783.0 | 5481 | 0.0906 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1404 | 784.0 | 5488 | 0.0905 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1404 | 785.0 | 5495 | 0.0904 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 786.0 | 5502 | 0.0904 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 787.0 | 5509 | 0.0904 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 788.0 | 5516 | 0.0904 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 789.0 | 5523 | 0.0904 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 790.0 | 5530 | 0.0904 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 791.0 | 5537 | 0.0903 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 792.0 | 5544 | 0.0903 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 793.0 | 5551 | 0.0902 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 794.0 | 5558 | 0.0902 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 795.0 | 5565 | 0.0903 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 796.0 | 5572 | 0.0903 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 797.0 | 5579 | 0.0903 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 798.0 | 5586 | 0.0902 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 799.0 | 5593 | 0.0901 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 800.0 | 5600 | 0.0900 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 801.0 | 5607 | 0.0900 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 802.0 | 5614 | 0.0899 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 803.0 | 5621 | 0.0898 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 804.0 | 5628 | 0.0899 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 805.0 | 5635 | 0.0899 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 806.0 | 5642 | 0.0897 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 807.0 | 5649 | 0.0897 | 32.2284 | 28.5058 | 31.4656 | 31.5557 | 19.0 |
| 0.1324 | 808.0 | 5656 | 0.0897 | 32.2284 | 28.5058 | 31.4656 | 31.5557 | 19.0 |
| 0.1324 | 809.0 | 5663 | 0.0897 | 32.2284 | 28.5058 | 31.4656 | 31.5557 | 19.0 |
| 0.1324 | 810.0 | 5670 | 0.0897 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 811.0 | 5677 | 0.0897 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 812.0 | 5684 | 0.0897 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 813.0 | 5691 | 0.0897 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 814.0 | 5698 | 0.0897 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 815.0 | 5705 | 0.0897 | 32.1928 | 28.436 | 31.3924 | 31.4973 | 19.0 |
| 0.1324 | 816.0 | 5712 | 0.0897 | 32.1928 | 28.436 | 31.3924 | 31.4973 | 19.0 |
| 0.1324 | 817.0 | 5719 | 0.0897 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 818.0 | 5726 | 0.0897 | 32.1928 | 28.436 | 31.3924 | 31.4973 | 19.0 |
| 0.1324 | 819.0 | 5733 | 0.0897 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 820.0 | 5740 | 0.0897 | 32.1928 | 28.436 | 31.3924 | 31.4973 | 19.0 |
| 0.1324 | 821.0 | 5747 | 0.0897 | 32.1928 | 28.436 | 31.3924 | 31.4973 | 19.0 |
| 0.1324 | 822.0 | 5754 | 0.0896 | 32.1928 | 28.436 | 31.3924 | 31.4973 | 19.0 |
| 0.1324 | 823.0 | 5761 | 0.0895 | 32.1928 | 28.436 | 31.3924 | 31.4973 | 19.0 |
| 0.1324 | 824.0 | 5768 | 0.0895 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 825.0 | 5775 | 0.0894 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 826.0 | 5782 | 0.0893 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 827.0 | 5789 | 0.0892 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 828.0 | 5796 | 0.0890 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 829.0 | 5803 | 0.0889 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 830.0 | 5810 | 0.0888 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 831.0 | 5817 | 0.0887 | 32.2825 | 28.5853 | 31.5326 | 31.5924 | 19.0 |
| 0.1324 | 832.0 | 5824 | 0.0887 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 833.0 | 5831 | 0.0886 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1324 | 834.0 | 5838 | 0.0886 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 835.0 | 5845 | 0.0886 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 836.0 | 5852 | 0.0885 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 837.0 | 5859 | 0.0885 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 838.0 | 5866 | 0.0885 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 839.0 | 5873 | 0.0884 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 840.0 | 5880 | 0.0883 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 841.0 | 5887 | 0.0883 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 842.0 | 5894 | 0.0883 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 843.0 | 5901 | 0.0883 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 844.0 | 5908 | 0.0883 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 845.0 | 5915 | 0.0883 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 846.0 | 5922 | 0.0882 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 847.0 | 5929 | 0.0881 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 848.0 | 5936 | 0.0881 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 849.0 | 5943 | 0.0880 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 850.0 | 5950 | 0.0880 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 851.0 | 5957 | 0.0880 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 852.0 | 5964 | 0.0880 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 853.0 | 5971 | 0.0879 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 854.0 | 5978 | 0.0879 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 855.0 | 5985 | 0.0878 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 856.0 | 5992 | 0.0878 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.1324 | 857.0 | 5999 | 0.0877 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.126 | 858.0 | 6006 | 0.0877 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.126 | 859.0 | 6013 | 0.0877 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.126 | 860.0 | 6020 | 0.0877 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.126 | 861.0 | 6027 | 0.0876 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 862.0 | 6034 | 0.0876 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 863.0 | 6041 | 0.0875 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 864.0 | 6048 | 0.0875 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 865.0 | 6055 | 0.0874 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 866.0 | 6062 | 0.0874 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 867.0 | 6069 | 0.0873 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 868.0 | 6076 | 0.0872 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 869.0 | 6083 | 0.0871 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 870.0 | 6090 | 0.0871 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 871.0 | 6097 | 0.0870 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 872.0 | 6104 | 0.0869 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 873.0 | 6111 | 0.0869 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 874.0 | 6118 | 0.0869 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 875.0 | 6125 | 0.0868 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 876.0 | 6132 | 0.0868 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 877.0 | 6139 | 0.0868 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 878.0 | 6146 | 0.0868 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 879.0 | 6153 | 0.0867 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 880.0 | 6160 | 0.0867 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 881.0 | 6167 | 0.0867 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 882.0 | 6174 | 0.0867 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 883.0 | 6181 | 0.0867 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 884.0 | 6188 | 0.0867 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 885.0 | 6195 | 0.0866 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 886.0 | 6202 | 0.0866 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 887.0 | 6209 | 0.0866 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 888.0 | 6216 | 0.0865 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 889.0 | 6223 | 0.0866 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 890.0 | 6230 | 0.0866 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 891.0 | 6237 | 0.0865 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 892.0 | 6244 | 0.0866 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 893.0 | 6251 | 0.0866 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 894.0 | 6258 | 0.0866 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 895.0 | 6265 | 0.0866 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 896.0 | 6272 | 0.0865 | 32.5814 | 28.9613 | 31.8465 | 31.9321 | 19.0 |
| 0.126 | 897.0 | 6279 | 0.0865 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 898.0 | 6286 | 0.0865 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 899.0 | 6293 | 0.0865 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 900.0 | 6300 | 0.0865 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 901.0 | 6307 | 0.0865 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 902.0 | 6314 | 0.0864 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 903.0 | 6321 | 0.0864 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 904.0 | 6328 | 0.0864 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 905.0 | 6335 | 0.0864 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 906.0 | 6342 | 0.0864 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 907.0 | 6349 | 0.0864 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 908.0 | 6356 | 0.0864 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 909.0 | 6363 | 0.0864 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 910.0 | 6370 | 0.0864 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 911.0 | 6377 | 0.0863 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 912.0 | 6384 | 0.0863 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 913.0 | 6391 | 0.0862 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 914.0 | 6398 | 0.0862 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 915.0 | 6405 | 0.0862 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 916.0 | 6412 | 0.0862 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 917.0 | 6419 | 0.0861 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 918.0 | 6426 | 0.0861 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 919.0 | 6433 | 0.0861 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 920.0 | 6440 | 0.0861 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 921.0 | 6447 | 0.0861 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 922.0 | 6454 | 0.0860 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 923.0 | 6461 | 0.0860 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 924.0 | 6468 | 0.0860 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 925.0 | 6475 | 0.0860 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 926.0 | 6482 | 0.0860 | 32.4879 | 28.7819 | 31.7054 | 31.836 | 19.0 |
| 0.126 | 927.0 | 6489 | 0.0860 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.126 | 928.0 | 6496 | 0.0860 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 929.0 | 6503 | 0.0860 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 930.0 | 6510 | 0.0860 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 931.0 | 6517 | 0.0861 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 932.0 | 6524 | 0.0860 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 933.0 | 6531 | 0.0860 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 934.0 | 6538 | 0.0861 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 935.0 | 6545 | 0.0860 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 936.0 | 6552 | 0.0861 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 937.0 | 6559 | 0.0861 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 938.0 | 6566 | 0.0861 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.123 | 939.0 | 6573 | 0.0860 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.123 | 940.0 | 6580 | 0.0860 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.123 | 941.0 | 6587 | 0.0860 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.123 | 942.0 | 6594 | 0.0860 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.123 | 943.0 | 6601 | 0.0860 | 32.1384 | 28.4015 | 31.3493 | 31.4457 | 19.0 |
| 0.123 | 944.0 | 6608 | 0.0860 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 945.0 | 6615 | 0.0860 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 946.0 | 6622 | 0.0859 | 32.5232 | 28.7806 | 31.7308 | 31.7834 | 19.0 |
| 0.123 | 947.0 | 6629 | 0.0859 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 948.0 | 6636 | 0.0859 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 949.0 | 6643 | 0.0859 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 950.0 | 6650 | 0.0859 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 951.0 | 6657 | 0.0859 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 952.0 | 6664 | 0.0859 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 953.0 | 6671 | 0.0859 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 954.0 | 6678 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 955.0 | 6685 | 0.0858 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 956.0 | 6692 | 0.0858 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 957.0 | 6699 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 958.0 | 6706 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 959.0 | 6713 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 960.0 | 6720 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 961.0 | 6727 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 962.0 | 6734 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 963.0 | 6741 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 964.0 | 6748 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 965.0 | 6755 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 966.0 | 6762 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 967.0 | 6769 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 968.0 | 6776 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 969.0 | 6783 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 970.0 | 6790 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 971.0 | 6797 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 972.0 | 6804 | 0.0858 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 973.0 | 6811 | 0.0858 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 974.0 | 6818 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 975.0 | 6825 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 976.0 | 6832 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 977.0 | 6839 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 978.0 | 6846 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 979.0 | 6853 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 980.0 | 6860 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 981.0 | 6867 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 982.0 | 6874 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 983.0 | 6881 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 984.0 | 6888 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 985.0 | 6895 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 986.0 | 6902 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 987.0 | 6909 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 988.0 | 6916 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 989.0 | 6923 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 990.0 | 6930 | 0.0856 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 991.0 | 6937 | 0.0857 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 992.0 | 6944 | 0.0857 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 993.0 | 6951 | 0.0856 | 32.6133 | 28.96 | 31.8684 | 31.8875 | 19.0 |
| 0.123 | 994.0 | 6958 | 0.0856 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 995.0 | 6965 | 0.0856 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 996.0 | 6972 | 0.0856 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 997.0 | 6979 | 0.0856 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 998.0 | 6986 | 0.0856 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.123 | 999.0 | 6993 | 0.0856 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
| 0.1213 | 1000.0 | 7000 | 0.0856 | 32.2284 | 28.534 | 31.5055 | 31.5557 | 19.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
manirai91/enlm-roberta-imdb
|
manirai91
| 2022-11-22T20:43:14Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T16:57:28Z |
---
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: enlmr-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlmr-imdb
This model is a fine-tuned version of [manirai91/enlm-r-final](https://huggingface.co/manirai91/enlm-r-final) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
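### Usage (sketch)
The card ships without a usage example; below is a minimal inference sketch using the standard `transformers` text-classification pipeline. The example review is made up, and the label names are assumptions — the actual labels depend on the fine-tuned head.
```python
from transformers import pipeline

# Sentiment classification with the IMDB-fine-tuned ENLM-RoBERTa model
classifier = pipeline("text-classification", model="manirai91/enlm-roberta-imdb")

print(classifier("A surprisingly moving film with a terrific final act."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- label-to-sentiment mapping depends on the fine-tuned head
```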
|
manirai91/xlm-roberta-imdb
|
manirai91
| 2022-11-22T20:36:34Z | 126 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T16:42:44Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: xlm-roberta-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-imdb
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2
|
research-backup
| 2022-11-22T20:25:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:40:00Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.790515873015873
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37967914438502676
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3857566765578635
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5063924402445803
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.646
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4517543859649123
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.42824074074074076
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8080458038270304
- name: F1 (macro)
type: f1_macro
value: 0.7357565896819839
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7894366197183098
- name: F1 (macro)
type: f1_macro
value: 0.4680529848631216
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5520043336944745
- name: F1 (macro)
type: f1_macro
value: 0.5647005456999193
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9177157960631565
- name: F1 (macro)
type: f1_macro
value: 0.7991809595622609
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.770918207458477
- name: F1 (macro)
type: f1_macro
value: 0.701131895018139
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.37967914438502676
- Accuracy on SAT: 0.3857566765578635
- Accuracy on BATS: 0.5063924402445803
- Accuracy on U2: 0.4517543859649123
- Accuracy on U4: 0.42824074074074076
- Accuracy on Google: 0.646
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8080458038270304
- Micro F1 score on CogALexV: 0.7894366197183098
- Micro F1 score on EVALution: 0.5520043336944745
- Micro F1 score on K&H+N: 0.9177157960631565
- Micro F1 score on ROOT09: 0.770918207458477
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.790515873015873
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # a single relation embedding of shape (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
monakth/bert-base-multilingual-cased-sv2
|
monakth
| 2022-11-22T19:51:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-22T19:49:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-multilingual-cased-ssv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-ssv
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
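### Usage (sketch)
As a hedged usage sketch (not part of the original card), the model can be queried with the standard `question-answering` pipeline. The question and context below are illustrative only, and because the model was trained on SQuAD v2 the predicted answer may be empty:
```python
from transformers import pipeline

# Extractive question answering with the SQuAD v2 fine-tuned multilingual BERT
qa = pipeline("question-answering", model="monakth/bert-base-multilingual-cased-sv2")

result = qa(
    question="Where was the treaty signed?",
    context="The treaty was signed in Vienna in 1961 after several rounds of negotiation.",
)
print(result["answer"], result["score"])
```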
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2
|
research-backup
| 2022-11-22T19:43:03Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:34:42Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7143253968253969
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.30213903743315507
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.29673590504451036
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.41078376876042244
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.444
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3508771929824561
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35185185185185186
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8389332529757421
- name: F1 (macro)
type: f1_macro
value: 0.8320870274406121
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8110328638497653
- name: F1 (macro)
type: f1_macro
value: 0.558175722976752
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6397616468039004
- name: F1 (macro)
type: f1_macro
value: 0.6018197960350038
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.936495791889824
- name: F1 (macro)
type: f1_macro
value: 0.8329891004271437
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8574114697586963
- name: F1 (macro)
type: f1_macro
value: 0.859031346414651
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.30213903743315507
- Accuracy on SAT: 0.29673590504451036
- Accuracy on BATS: 0.41078376876042244
- Accuracy on U2: 0.3508771929824561
- Accuracy on U4: 0.35185185185185186
- Accuracy on Google: 0.444
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8389332529757421
- Micro F1 score on CogALexV: 0.8110328638497653
- Micro F1 score on EVALution: 0.6397616468039004
- Micro F1 score on K&H+N: 0.936495791889824
- Micro F1 score on ROOT09: 0.8574114697586963
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7143253968253969
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # a single relation embedding of shape (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
alryan1478/gpt2-wikitext2
|
alryan1478
| 2022-11-22T19:15:47Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-22T16:54:38Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.561 | 1.0 | 2249 | 6.4685 |
| 6.1921 | 2.0 | 4498 | 6.1978 |
| 6.017 | 3.0 | 6747 | 6.1085 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
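### Usage (sketch)
A minimal generation sketch, assuming the standard `text-generation` pipeline; the prompt is arbitrary and not from the original card:
```python
from transformers import pipeline

# Sample a continuation from the WikiText-2 fine-tuned GPT-2
generator = pipeline("text-generation", model="alryan1478/gpt2-wikitext2")

output = generator("The history of the region begins", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```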
|
masapasa/meddner
|
masapasa
| 2022-11-22T19:13:06Z | 3 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"license:mit",
"model-index",
"region:us"
] |
token-classification
| 2022-11-22T19:05:40Z |
---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_core_med7_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8649613325
- name: NER Recall
type: recall
value: 0.8892966361
- name: NER F Score
type: f_score
value: 0.876960193
duplicated_from: kormilitzin/en_core_med7_lg
---
| Feature | Description |
| --- | --- |
| **Name** | `en_core_med7_lg` |
| **Version** | `3.4.2.1` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
### Label Scheme
<details>
<summary>View label scheme (7 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DOSAGE`, `DRUG`, `DURATION`, `FORM`, `FREQUENCY`, `ROUTE`, `STRENGTH` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 87.70 |
| `ENTS_P` | 86.50 |
| `ENTS_R` | 88.93 |
| `TOK2VEC_LOSS` | 226109.53 |
| `NER_LOSS` | 302222.55 |
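### Usage (sketch)
A short usage sketch, assuming the `en_core_med7_lg` pipeline package has already been installed locally (e.g. from the wheel distributed with this repository); the clinical sentence is made up:
```python
import spacy

# Load the installed Med7 pipeline and run clinical NER
nlp = spacy.load("en_core_med7_lg")

doc = nlp("Patient was given 500 mg of paracetamol orally twice a day for 5 days.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # expected labels come from the scheme above, e.g. STRENGTH, DRUG, ROUTE
```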
### BibTeX entry and citation info
```bibtex
@article{kormilitzin2021med7,
title={Med7: A transferable clinical natural language processing model for electronic health records},
author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo},
journal={Artificial Intelligence in Medicine},
volume={118},
pages={102086},
year={2021},
publisher={Elsevier}
}
```
|
HarshitaDiddee/AmericasNLP_Bribri
|
HarshitaDiddee
| 2022-11-22T18:35:11Z | 91 | 0 |
transformers
|
[
"transformers",
"wav2vec2",
"automatic-speech-recognition",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-22T18:24:40Z |
---
license: cc-by-4.0
---
ASR Model for Bribri (Source: AmericasNLP Shared Task 2022)
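The card gives no usage example; the sketch below assumes the repository bundles a `Wav2Vec2Processor` alongside the CTC model, and `example_bribri.wav` is a placeholder path:
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "HarshitaDiddee/AmericasNLP_Bribri"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes a processor/tokenizer is included in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("example_bribri.wav", sr=16000)  # placeholder 16 kHz audio file
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```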
|
umairalipathan/finetuning-sentiment-model-surrender-final
|
umairalipathan
| 2022-11-22T18:17:49Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T18:08:12Z |
---
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-surrender-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-surrender-final
This model is a fine-tuned version of [umairalipathan/autotrain-sisu_surrender-2206370778](https://huggingface.co/umairalipathan/autotrain-sisu_surrender-2206370778) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2072
- eval_accuracy: 0.9556
- eval_f1: 0.9714
- eval_runtime: 8.4
- eval_samples_per_second: 5.357
- eval_steps_per_second: 0.357
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.2
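### Usage (sketch)
Since the card reports only evaluation metrics, here is a hedged inference sketch using the lower-level API; the example sentence is made up and the meaning of each class index depends on the fine-tuned head:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "umairalipathan/finetuning-sentiment-model-surrender-final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("We decided to keep going despite the setbacks.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class-index-to-label mapping depends on the fine-tuned head
```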
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2
|
research-backup
| 2022-11-22T17:54:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:24:50Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7584126984126984
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.32887700534759357
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3353115727002967
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.39466370205669815
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.504
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.39035087719298245
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.38425925925925924
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8323037516950429
- name: F1 (macro)
type: f1_macro
value: 0.8135716497645339
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7492957746478873
- name: F1 (macro)
type: f1_macro
value: 0.28766475530328117
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5861321776814734
- name: F1 (macro)
type: f1_macro
value: 0.545958272767557
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.903109132642415
- name: F1 (macro)
type: f1_macro
value: 0.7624740127692404
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8429959260419931
- name: F1 (macro)
type: f1_macro
value: 0.8383818257665551
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.32887700534759357
- Accuracy on SAT: 0.3353115727002967
- Accuracy on BATS: 0.39466370205669815
- Accuracy on U2: 0.39035087719298245
- Accuracy on U4: 0.38425925925925924
- Accuracy on Google: 0.504
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8323037516950429
- Micro F1 score on CogALexV: 0.7492957746478873
- Micro F1 score on EVALution: 0.5861321776814734
- Micro F1 score on K&H+N: 0.903109132642415
- Micro F1 score on ROOT09: 0.8429959260419931
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7584126984126984
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # a single relation embedding of shape (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
gd1m3y/test_trainer_1
|
gd1m3y
| 2022-11-22T17:38:49Z | 178 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T17:04:11Z |
---
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
model-index:
- name: test_trainer_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer_1
This model is a fine-tuned version of [SALT-NLP/FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5963
- eval_accuracy: 0.9242
- eval_runtime: 4.3354
- eval_samples_per_second: 97.337
- eval_steps_per_second: 12.225
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
This is a demo model for our reference.
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
|
research-backup
| 2022-11-22T17:34:18Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:40:04Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8018650793650793
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3502673796791444
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35014836795252224
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5202890494719289
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.644
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.39035087719298245
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.43287037037037035
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8461654361910502
- name: F1 (macro)
type: f1_macro
value: 0.8411664963735426
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8145539906103286
- name: F1 (macro)
type: f1_macro
value: 0.5873414064116238
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6505958829902492
- name: F1 (macro)
type: f1_macro
value: 0.6269958308732405
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9319051262433052
- name: F1 (macro)
type: f1_macro
value: 0.8393686548194149
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7511751801942964
- name: F1 (macro)
type: f1_macro
value: 0.6464435364634403
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3502673796791444
- Accuracy on SAT: 0.35014836795252224
- Accuracy on BATS: 0.5202890494719289
- Accuracy on U2: 0.39035087719298245
- Accuracy on U4: 0.43287037037037035
- Accuracy on Google: 0.644
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8461654361910502
- Micro F1 score on CogALexV: 0.8145539906103286
- Micro F1 score on EVALution: 0.6505958829902492
- Micro F1 score on K&H+N: 0.9319051262433052
- Micro F1 score on ROOT09: 0.7511751801942964
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8018650793650793
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as shown below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, )
```
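Since `get_embedding` returns one dense vector per word pair, relation similarity can be scored by comparing those vectors. The sketch below is illustrative and not part of the original card; it assumes only the `get_embedding` call shown above plus numpy.
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1")

# Embed two word pairs and compare their relations with cosine similarity.
a = np.array(model.get_embedding(['Tokyo', 'Japan']))
b = np.array(model.get_embedding(['Paris', 'France']))
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)  # values closer to 1.0 indicate more similar relations
```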
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2
|
research-backup
| 2022-11-22T17:33:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:22:15Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7463293650793651
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34759358288770054
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3590504451038576
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.481378543635353
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.494
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3991228070175439
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35648148148148145
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8610818140726232
- name: F1 (macro)
type: f1_macro
value: 0.8525458448699613
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8171361502347417
- name: F1 (macro)
type: f1_macro
value: 0.5610856949320919
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6229685807150596
- name: F1 (macro)
type: f1_macro
value: 0.6126645128177534
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9215413507685887
- name: F1 (macro)
type: f1_macro
value: 0.8042276096823726
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.857724851143842
- name: F1 (macro)
type: f1_macro
value: 0.8472661094927697
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.34759358288770054
- Accuracy on SAT: 0.3590504451038576
- Accuracy on BATS: 0.481378543635353
- Accuracy on U2: 0.3991228070175439
- Accuracy on U4: 0.35648148148148145
- Accuracy on Google: 0.494
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8610818140726232
- Micro F1 score on CogALexV: 0.8171361502347417
- Micro F1 score on EVALution: 0.6229685807150596
- Micro F1 score on K&H+N: 0.9215413507685887
- Micro F1 score on ROOT09: 0.857724851143842
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7463293650793651
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as shown below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1
|
research-backup
| 2022-11-22T17:26:35Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:36:45Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7624206349206349
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3770053475935829
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3768545994065282
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.44580322401334077
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.57
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.39473684210526316
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37962962962962965
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8797649540455025
- name: F1 (macro)
type: f1_macro
value: 0.8747086885506318
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7992957746478874
- name: F1 (macro)
type: f1_macro
value: 0.5104712427778083
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6397616468039004
- name: F1 (macro)
type: f1_macro
value: 0.6084431389476428
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9367044585101204
- name: F1 (macro)
type: f1_macro
value: 0.8301423655430062
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8677530554685051
- name: F1 (macro)
type: f1_macro
value: 0.8691031015559968
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3770053475935829
- Accuracy on SAT: 0.3768545994065282
- Accuracy on BATS: 0.44580322401334077
- Accuracy on U2: 0.39473684210526316
- Accuracy on U4: 0.37962962962962965
- Accuracy on Google: 0.57
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8797649540455025
- Micro F1 score on CogALexV: 0.7992957746478874
- Micro F1 score on EVALution: 0.6397616468039004
- Micro F1 score on K&H+N: 0.9367044585101204
- Micro F1 score on ROOT09: 0.8677530554685051
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7624206349206349
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as shown below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
dung1308/dung_NT_model_save
|
dung1308
| 2022-11-22T17:22:09Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-22T01:33:27Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: dung1308/dung_NT_model_save
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dung1308/dung_NT_model_save
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8144
- Validation Loss: 3.6030
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
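The card does not include a usage snippet, so the following is an illustrative sketch rather than guidance from the author; it assumes the repository ships a compatible tokenizer and that inputs follow PhoBERT's word-segmented Vietnamese convention.
```python
from transformers import pipeline

# Illustrative only: the repository provides TensorFlow weights, so the TF framework is requested.
# PhoBERT-style models expect word-segmented Vietnamese (tokens joined with underscores).
fill_mask = pipeline("fill-mask", model="dung1308/dung_NT_model_save", framework="tf")
print(fill_mask("Hà_Nội là thủ_đô của <mask> ."))  # hypothetical example sentence
```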
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4431 | 3.9985 | 0 |
| 3.9986 | 3.8016 | 1 |
| 3.8144 | 3.6030 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.7.0
- Tokenizers 0.11.0
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1
|
research-backup
| 2022-11-22T17:19:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:32:32Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.775079365079365
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3716577540106952
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3768545994065282
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34185658699277377
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.428
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37719298245614036
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3541666666666667
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.899201446436643
- name: F1 (macro)
type: f1_macro
value: 0.888889751667277
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7814553990610328
- name: F1 (macro)
type: f1_macro
value: 0.5516320672010655
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6408450704225352
- name: F1 (macro)
type: f1_macro
value: 0.6082440999373899
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9525631216526397
- name: F1 (macro)
type: f1_macro
value: 0.862670256588896
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.840802256345973
- name: F1 (macro)
type: f1_macro
value: 0.8106179148472547
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3716577540106952
- Accuracy on SAT: 0.3768545994065282
- Accuracy on BATS: 0.34185658699277377
- Accuracy on U2: 0.37719298245614036
- Accuracy on U4: 0.3541666666666667
- Accuracy on Google: 0.428
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.899201446436643
- Micro F1 score on CogALexV: 0.7814553990610328
- Micro F1 score on EVALution: 0.6408450704225352
- Micro F1 score on K&H+N: 0.9525631216526397
- Micro F1 score on ROOT09: 0.840802256345973
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.775079365079365
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as shown below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1
|
research-backup
| 2022-11-22T17:13:57Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:30:48Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7387698412698412
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3342245989304813
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34718100890207715
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5441912173429683
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.644
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35526315789473684
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37962962962962965
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8145246346240772
- name: F1 (macro)
type: f1_macro
value: 0.801802054210856
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7774647887323943
- name: F1 (macro)
type: f1_macro
value: 0.5026184700694826
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5980498374864572
- name: F1 (macro)
type: f1_macro
value: 0.5765100456864519
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8878069138206858
- name: F1 (macro)
type: f1_macro
value: 0.7711282513838499
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.827326856784707
- name: F1 (macro)
type: f1_macro
value: 0.824410778730745
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3342245989304813
- Accuracy on SAT: 0.34718100890207715
- Accuracy on BATS: 0.5441912173429683
- Accuracy on U2: 0.35526315789473684
- Accuracy on U4: 0.37962962962962965
- Accuracy on Google: 0.644
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8145246346240772
- Micro F1 score on CogALexV: 0.7774647887323943
- Micro F1 score on EVALution: 0.5980498374864572
- Micro F1 score on K&H+N: 0.8878069138206858
- Micro F1 score on ROOT09: 0.827326856784707
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7387698412698412
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as shown below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1
|
research-backup
| 2022-11-22T17:06:31Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:26:58Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35561497326203206
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34718100890207715
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.48526959421901056
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.618
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.39473684210526316
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3541666666666667
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8442067199035708
- name: F1 (macro)
type: f1_macro
value: 0.823901479879959
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8110328638497653
- name: F1 (macro)
type: f1_macro
value: 0.5472550813103398
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5769230769230769
- name: F1 (macro)
type: f1_macro
value: 0.5466975926628965
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9118035751547611
- name: F1 (macro)
type: f1_macro
value: 0.7693980437177949
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8564713256032591
- name: F1 (macro)
type: f1_macro
value: 0.851273747817193
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.35561497326203206
- Accuracy on SAT: 0.34718100890207715
- Accuracy on BATS: 0.48526959421901056
- Accuracy on U2: 0.39473684210526316
- Accuracy on U4: 0.3541666666666667
- Accuracy on Google: 0.618
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8442067199035708
- Micro F1 score on CogALexV: 0.8110328638497653
- Micro F1 score on EVALution: 0.5769230769230769
- Micro F1 score on K&H+N: 0.9118035751547611
- Micro F1 score on ROOT09: 0.8564713256032591
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as shown below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1
|
research-backup
| 2022-11-22T17:00:21Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:22:15Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8430952380952381
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3582887700534759
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3649851632047478
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4280155642023346
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.532
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3333333333333333
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3101851851851852
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8460147657073979
- name: F1 (macro)
type: f1_macro
value: 0.8315897128108677
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8084507042253521
- name: F1 (macro)
type: f1_macro
value: 0.5269777075808457
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6424702058504875
- name: F1 (macro)
type: f1_macro
value: 0.6178608994596904
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.913612019197329
- name: F1 (macro)
type: f1_macro
value: 0.7738790468743169
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8693199623942337
- name: F1 (macro)
type: f1_macro
value: 0.864532922094076
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3582887700534759
- Accuracy on SAT: 0.3649851632047478
- Accuracy on BATS: 0.4280155642023346
- Accuracy on U2: 0.3333333333333333
- Accuracy on U4: 0.3101851851851852
- Accuracy on Google: 0.532
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8460147657073979
- Micro F1 score on CogALexV: 0.8084507042253521
- Micro F1 score on EVALution: 0.6424702058504875
- Micro F1 score on K&H+N: 0.913612019197329
- Micro F1 score on ROOT09: 0.8693199623942337
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8430952380952381
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as shown below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
jpcompartir/579-private-v3
|
jpcompartir
| 2022-11-22T16:58:43Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-22T16:58:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
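As a quick illustration (not part of the original card), the embeddings computed above can be compared with a cosine score; this continues from the `sentence_embeddings` tensor produced by the previous snippet.
```python
import torch.nn.functional as F

# Normalise the two sentence embeddings and take their dot product (cosine similarity).
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
score = (normalized[0] @ normalized[1]).item()
print(f"cosine similarity: {score:.4f}")
```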
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3000 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2.9621969030370343e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 3000,
"warmup_steps": 300,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
SweepCake/LunarLander-v2-PPO-HFcourse
|
SweepCake
| 2022-11-22T15:44:29Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-22T15:44:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.22 +/- 13.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
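Until the author completes the section above, the following hedged sketch shows one way to load and evaluate the checkpoint; the filename is a guess based on the usual deep-RL-course convention and may need adjusting.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by the card.
checkpoint = load_from_hub(
    repo_id="SweepCake/LunarLander-v2-PPO-HFcourse",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Roll out a few evaluation episodes and report the mean reward.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```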
|
Dumeng/distilbert-base-uncased-finetuned-emotion
|
Dumeng
| 2022-11-22T15:11:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-19T19:49:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
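In the absence of author-provided details, here is a minimal, illustrative inference sketch (not from the original author); the emitted label names depend on how the emotion dataset's classes were mapped during fine-tuning.
```python
from transformers import pipeline

# Illustrative only: basic single-text inference with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="Dumeng/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy this finally works!"))
```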
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/oryxspioenkop
|
huggingtweets
| 2022-11-22T15:10:21Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-22T15:09:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/oryxspioenkop/1669129816805/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/929707102083395584/tCWiYbO1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Oryx</div>
<div style="text-align: center; font-size: 14px;">@oryxspioenkop</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Oryx.
| Data | Oryx |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 2219 |
| Short tweets | 266 |
| Tweets kept | 761 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qbqfz863/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oryxspioenkop's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2es3q78b) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2es3q78b/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/oryxspioenkop')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Dundalia/lfqa_covid
|
Dundalia
| 2022-11-22T15:07:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-22T14:39:45Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: lfqa_covid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lfqa_covid
This model is a fine-tuned version of [vblagoje/bart_lfqa](https://huggingface.co/vblagoje/bart_lfqa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1028
- Bleu: 0.0
- Gen Len: 19.8564
## Model description
More information needed
## Intended uses & limitations
More information needed
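The card does not document the expected input format; since the base model vblagoje/bart_lfqa is prompted with a "question: ... context: ..." string, the illustrative sketch below assumes the same convention.
```python
from transformers import pipeline

# Illustrative only: long-form question answering with the fine-tuned checkpoint.
generator = pipeline("text2text-generation", model="Dundalia/lfqa_covid")
prompt = (
    "question: How does COVID-19 spread? "
    "context: COVID-19 spreads mainly through respiratory droplets produced when an infected person coughs or sneezes."
)
print(generator(prompt, max_length=64)[0]["generated_text"])
```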
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 1.5923 | 1.0 | 808 | 0.1028 | 0.0 | 19.8564 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
bitsanlp/deberta-v3-base_base
|
bitsanlp
| 2022-11-22T14:37:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T13:49:27Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base_base
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3
|
gary109
| 2022-11-22T14:06:09Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"dataset:ai_light_dance",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-22T08:33:10Z |
---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
datasets:
- ai_light_dance
metrics:
- wer
model-index:
- name: ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3
This model is a fine-tuned version of [gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3](https://huggingface.co/gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3) on the GARY109/AI_LIGHT_DANCE - ONSET-IDMT-SMT-DRUMS-V2+MDBDRUMS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5550
- Wer: 0.3147
## Model description
More information needed
## Intended uses & limitations
More information needed
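As a hedged sketch (not from the original card), a wav2vec2 CTC checkpoint like this one can usually be run through the generic automatic-speech-recognition pipeline; `drum_loop.wav` is a placeholder file, and the assumption that the transcript is a sequence of drum-event tokens rather than words comes from the dataset name, not from documented behaviour.
```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3",
)
# Returns a dict with a "text" field containing the decoded CTC output.
print(transcriber("drum_loop.wav"))
```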
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1747 | 1.0 | 45 | 0.5638 | 0.3337 |
| 0.2339 | 2.0 | 90 | 0.5785 | 0.3254 |
| 0.2849 | 3.0 | 135 | 0.5586 | 0.3397 |
| 0.2396 | 4.0 | 180 | 0.5868 | 0.3266 |
| 0.2272 | 5.0 | 225 | 0.6052 | 0.3230 |
| 0.2497 | 6.0 | 270 | 0.5913 | 0.3278 |
| 0.2218 | 7.0 | 315 | 0.5926 | 0.3349 |
| 0.2584 | 8.0 | 360 | 0.5617 | 0.3218 |
| 0.2741 | 9.0 | 405 | 0.5901 | 0.3230 |
| 0.2481 | 10.0 | 450 | 0.5860 | 0.3278 |
| 0.2504 | 11.0 | 495 | 0.5991 | 0.3123 |
| 0.2125 | 12.0 | 540 | 0.5992 | 0.3218 |
| 0.2482 | 13.0 | 585 | 0.5756 | 0.3194 |
| 0.2135 | 14.0 | 630 | 0.5836 | 0.3302 |
| 0.2345 | 15.0 | 675 | 0.6347 | 0.3254 |
| 0.1912 | 16.0 | 720 | 0.6160 | 0.3206 |
| 0.2117 | 17.0 | 765 | 0.6268 | 0.3099 |
| 0.2217 | 18.0 | 810 | 0.6873 | 0.3182 |
| 0.2165 | 19.0 | 855 | 0.6721 | 0.3159 |
| 0.207 | 20.0 | 900 | 0.6312 | 0.3206 |
| 0.2263 | 21.0 | 945 | 0.6223 | 0.3290 |
| 0.2015 | 22.0 | 990 | 0.6319 | 0.3182 |
| 0.1997 | 23.0 | 1035 | 0.6527 | 0.3135 |
| 0.2318 | 24.0 | 1080 | 0.5987 | 0.3278 |
| 0.2196 | 25.0 | 1125 | 0.6269 | 0.3242 |
| 0.2298 | 26.0 | 1170 | 0.5774 | 0.3254 |
| 0.2117 | 27.0 | 1215 | 0.5938 | 0.3027 |
| 0.2553 | 28.0 | 1260 | 0.5831 | 0.3123 |
| 0.226 | 29.0 | 1305 | 0.6151 | 0.3099 |
| 0.1635 | 30.0 | 1350 | 0.5622 | 0.3230 |
| 0.5734 | 31.0 | 1395 | 0.6198 | 0.2920 |
| 0.2196 | 32.0 | 1440 | 0.5779 | 0.3039 |
| 0.2019 | 33.0 | 1485 | 0.5866 | 0.3111 |
| 0.2222 | 34.0 | 1530 | 0.5557 | 0.3063 |
| 0.2167 | 35.0 | 1575 | 0.5740 | 0.3206 |
| 0.2011 | 36.0 | 1620 | 0.5598 | 0.3004 |
| 0.2032 | 37.0 | 1665 | 0.5550 | 0.3147 |
| 0.225 | 38.0 | 1710 | 0.5794 | 0.3099 |
| 0.2068 | 39.0 | 1755 | 0.6223 | 0.3063 |
| 0.2105 | 40.0 | 1800 | 0.5797 | 0.3039 |
| 0.1968 | 41.0 | 1845 | 0.5681 | 0.2968 |
| 0.224 | 42.0 | 1890 | 0.5742 | 0.3170 |
| 0.2351 | 43.0 | 1935 | 0.5567 | 0.3111 |
| 0.2121 | 44.0 | 1980 | 0.5893 | 0.3039 |
| 0.1913 | 45.0 | 2025 | 0.6030 | 0.3027 |
| 0.1636 | 46.0 | 2070 | 0.5812 | 0.3004 |
| 0.2062 | 47.0 | 2115 | 0.6081 | 0.3004 |
| 0.2031 | 48.0 | 2160 | 0.5610 | 0.3159 |
| 0.1892 | 49.0 | 2205 | 0.5863 | 0.3147 |
| 0.1712 | 50.0 | 2250 | 0.5943 | 0.3159 |
| 0.1886 | 51.0 | 2295 | 0.5953 | 0.3051 |
| 0.1748 | 52.0 | 2340 | 0.5761 | 0.3087 |
| 0.1705 | 53.0 | 2385 | 0.6045 | 0.2872 |
| 0.1794 | 54.0 | 2430 | 0.5731 | 0.3075 |
| 0.1815 | 55.0 | 2475 | 0.5949 | 0.2849 |
| 0.1571 | 56.0 | 2520 | 0.5663 | 0.2884 |
| 0.1902 | 57.0 | 2565 | 0.5903 | 0.2956 |
| 0.2057 | 58.0 | 2610 | 0.5820 | 0.2872 |
| 0.1904 | 59.0 | 2655 | 0.5923 | 0.2896 |
| 0.1677 | 60.0 | 2700 | 0.5769 | 0.3075 |
| 0.1859 | 61.0 | 2745 | 0.5566 | 0.3147 |
| 0.2382 | 62.0 | 2790 | 0.5849 | 0.3051 |
| 0.1753 | 63.0 | 2835 | 0.5773 | 0.3075 |
| 0.1651 | 64.0 | 2880 | 0.5877 | 0.3039 |
| 0.1781 | 65.0 | 2925 | 0.5905 | 0.3027 |
| 0.1582 | 66.0 | 2970 | 0.5800 | 0.3015 |
| 0.1538 | 67.0 | 3015 | 0.6025 | 0.3075 |
| 0.1606 | 68.0 | 3060 | 0.5758 | 0.3039 |
| 0.1522 | 69.0 | 3105 | 0.5860 | 0.2932 |
| 0.1521 | 70.0 | 3150 | 0.5896 | 0.2956 |
| 0.1592 | 71.0 | 3195 | 0.5738 | 0.3027 |
| 0.2245 | 72.0 | 3240 | 0.5782 | 0.3039 |
| 0.2185 | 73.0 | 3285 | 0.5722 | 0.3027 |
| 0.1597 | 74.0 | 3330 | 0.5891 | 0.3004 |
| 0.1713 | 75.0 | 3375 | 0.5650 | 0.3027 |
| 0.1464 | 76.0 | 3420 | 0.5860 | 0.3063 |
| 0.1551 | 77.0 | 3465 | 0.5755 | 0.3027 |
| 0.1509 | 78.0 | 3510 | 0.5895 | 0.2944 |
| 0.176 | 79.0 | 3555 | 0.5750 | 0.2992 |
| 0.1695 | 80.0 | 3600 | 0.5759 | 0.3004 |
| 0.1797 | 81.0 | 3645 | 0.5904 | 0.2992 |
| 0.1371 | 82.0 | 3690 | 0.5923 | 0.3015 |
| 0.1798 | 83.0 | 3735 | 0.5864 | 0.2992 |
| 0.1386 | 84.0 | 3780 | 0.5733 | 0.3004 |
| 0.2173 | 85.0 | 3825 | 0.5751 | 0.3004 |
| 0.151 | 86.0 | 3870 | 0.5711 | 0.2968 |
| 0.1579 | 87.0 | 3915 | 0.5750 | 0.2992 |
| 0.1328 | 88.0 | 3960 | 0.5764 | 0.2944 |
| 0.1657 | 89.0 | 4005 | 0.5769 | 0.3004 |
| 0.1353 | 90.0 | 4050 | 0.5715 | 0.2956 |
| 0.1982 | 91.0 | 4095 | 0.5754 | 0.2968 |
| 0.1687 | 92.0 | 4140 | 0.5725 | 0.2980 |
| 0.1842 | 93.0 | 4185 | 0.5750 | 0.2980 |
| 0.1893 | 94.0 | 4230 | 0.5789 | 0.2944 |
| 0.1744 | 95.0 | 4275 | 0.5750 | 0.3004 |
| 0.1745 | 96.0 | 4320 | 0.5794 | 0.2980 |
| 0.1665 | 97.0 | 4365 | 0.5755 | 0.3004 |
| 0.1569 | 98.0 | 4410 | 0.5763 | 0.2968 |
| 0.1449 | 99.0 | 4455 | 0.5779 | 0.2968 |
| 0.1469 | 100.0 | 4500 | 0.5774 | 0.2968 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
adrianccy/donut-base-sroie-fine-tuned
|
adrianccy
| 2022-11-22T13:41:56Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-11-22T10:33:43Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie-fine-tuned
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
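As a hedged sketch (not part of the original card), Donut fine-tunes are normally driven through `DonutProcessor` and `VisionEncoderDecoderModel` with a task prompt token; the `<s_sroie>` prompt and the `receipt.png` path below are assumptions, since the card does not document them.
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "adrianccy/donut-base-sroie-fine-tuned"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("receipt.png").convert("RGB")   # placeholder image path
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_sroie>"                          # assumed task token
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```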
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2
|
research-backup
| 2022-11-22T12:57:06Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:39:41Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.6670436507936508
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3770053475935829
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37388724035608306
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4802668148971651
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.558
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.33771929824561403
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34953703703703703
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.893174627090553
- name: F1 (macro)
type: f1_macro
value: 0.8866591988732194
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7863849765258216
- name: F1 (macro)
type: f1_macro
value: 0.5308624907920565
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5704225352112676
- name: F1 (macro)
type: f1_macro
value: 0.5510856788391408
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9581275648605412
- name: F1 (macro)
type: f1_macro
value: 0.8644516035001516
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8523973675963648
- name: F1 (macro)
type: f1_macro
value: 0.8523947470987124
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3770053475935829
- Accuracy on SAT: 0.37388724035608306
- Accuracy on BATS: 0.4802668148971651
- Accuracy on U2: 0.33771929824561403
- Accuracy on U4: 0.34953703703703703
- Accuracy on Google: 0.558
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.893174627090553
- Micro F1 score on CogALexV: 0.7863849765258216
- Micro F1 score on EVALution: 0.5704225352112676
- Micro F1 score on K&H+N: 0.9581275648605412
- Micro F1 score on ROOT09: 0.8523973675963648
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.6670436507936508
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the pair; shape (768,) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1
|
research-backup
| 2022-11-22T12:14:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:38:20Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.743095238095238
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4839572192513369
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4896142433234421
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6375764313507504
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.862
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4868421052631579
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5046296296296297
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8862437848425494
- name: F1 (macro)
type: f1_macro
value: 0.8821974165746824
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8199530516431925
- name: F1 (macro)
type: f1_macro
value: 0.6171125235158227
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6153846153846154
- name: F1 (macro)
type: f1_macro
value: 0.6078721080640733
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9533977881338248
- name: F1 (macro)
type: f1_macro
value: 0.8639519260786466
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8752742087120025
- name: F1 (macro)
type: f1_macro
value: 0.8711564298029004
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4839572192513369
- Accuracy on SAT: 0.4896142433234421
- Accuracy on BATS: 0.6375764313507504
- Accuracy on U2: 0.4868421052631579
- Accuracy on U4: 0.5046296296296297
- Accuracy on Google: 0.862
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8862437848425494
- Micro F1 score on CogALexV: 0.8199530516431925
- Micro F1 score on EVALution: 0.6153846153846154
- Micro F1 score on K&H+N: 0.9533977881338248
- Micro F1 score on ROOT09: 0.8752742087120025
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.743095238095238
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the pair; shape (768,) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
echarlaix/vit-food101-int8
|
echarlaix
| 2022-11-22T10:48:21Z | 24 | 0 |
transformers
|
[
"transformers",
"openvino",
"vit",
"image-classification",
"int8",
"dataset:food101",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-27T16:58:41Z |
---
license: apache-2.0
datasets:
- food101
tags:
- openvino
- int8
---
## [Vision Transformer (ViT)](https://huggingface.co/juliensimon/autotrain-food101-1471154050) quantized and exported to the OpenVINO IR.
## Model Details
**Model Description:** This ViT model fine-tuned on Food-101 was statically quantized and exported to the OpenVINO IR using [optimum](https://huggingface.co/docs/optimum/intel/optimization_ov).
## Usage example
You can use this model with the Transformers *pipeline*:
```python
from transformers import pipeline, AutoFeatureExtractor
from optimum.intel.openvino import OVModelForImageClassification
model_id = "echarlaix/vit-food101-int8"
model = OVModelForImageClassification.from_pretrained(model_id)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
pipe = pipeline("image-classification", model=model, feature_extractor=feature_extractor)
outputs = pipe("http://farm2.staticflickr.com/1375/1394861946_171ea43524_z.jpg")
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2
|
research-backup
| 2022-11-22T10:47:13Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:32:09Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8508333333333333
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4304812834224599
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.42729970326409494
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.44580322401334077
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.63
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3684210526315789
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4375
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8832303751695043
- name: F1 (macro)
type: f1_macro
value: 0.8741977324174292
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8166666666666667
- name: F1 (macro)
type: f1_macro
value: 0.591110337920912
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6240520043336945
- name: F1 (macro)
type: f1_macro
value: 0.6033252228331162
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9563886763580719
- name: F1 (macro)
type: f1_macro
value: 0.8721700434002555
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8602319022250078
- name: F1 (macro)
type: f1_macro
value: 0.8623792536691078
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4304812834224599
- Accuracy on SAT: 0.42729970326409494
- Accuracy on BATS: 0.44580322401334077
- Accuracy on U2: 0.3684210526315789
- Accuracy on U4: 0.4375
- Accuracy on Google: 0.63
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8832303751695043
- Micro F1 score on CogALexV: 0.8166666666666667
- Micro F1 score on EVALution: 0.6240520043336945
- Micro F1 score on K&H+N: 0.9563886763580719
- Micro F1 score on ROOT09: 0.8602319022250078
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8508333333333333
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the pair; shape (768,) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2
|
research-backup
| 2022-11-22T10:11:20Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:30:34Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8311904761904761
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.47058823529411764
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.47774480712166173
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5630906058921623
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.746
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4605263157894737
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.48148148148148145
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9136658128672593
- name: F1 (macro)
type: f1_macro
value: 0.9119300574747814
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8356807511737089
- name: F1 (macro)
type: f1_macro
value: 0.6445552217787743
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6598049837486457
- name: F1 (macro)
type: f1_macro
value: 0.6390833044290024
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9680740070946651
- name: F1 (macro)
type: f1_macro
value: 0.9022447613880005
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.880288310874334
- name: F1 (macro)
type: f1_macro
value: 0.8774948713508829
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.47058823529411764
- Accuracy on SAT: 0.47774480712166173
- Accuracy on BATS: 0.5630906058921623
- Accuracy on U2: 0.4605263157894737
- Accuracy on U4: 0.48148148148148145
- Accuracy on Google: 0.746
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9136658128672593
- Micro F1 score on CogALexV: 0.8356807511737089
- Micro F1 score on EVALution: 0.6598049837486457
- Micro F1 score on K&H+N: 0.9680740070946651
- Micro F1 score on ROOT09: 0.880288310874334
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8311904761904761
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the pair; shape (768,) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
DigitalUmuganda/lingala_vits_tts
|
DigitalUmuganda
| 2022-11-22T10:08:11Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-11-21T22:12:13Z |
# Lingala Text-to-Speech
This model was trained on OpenSLR's 71.6-hour aligned Lingala Bible dataset.
## Model description
The model is a Conditional Variational Autoencoder with Adversarial Learning (VITS), an end-to-end approach to the text-to-speech task. It was trained with the ESPnet2 toolkit.
## Usage
First, install ESPnet2:
``` sh
pip install espnet
```
Download the model and the config files from this repo.
To generate a wav file using this model, run the following:
```python
from espnet2.bin.tts_inference import Text2Speech
import soundfile as sf

# Load the downloaded config and checkpoint, synthesize, and write a 16-bit PCM wav file.
text2speech = Text2Speech(train_config="config.yaml", model_file="train.total_count.best.pth")
wav = text2speech("oyo kati na Ye ozwi lisiko mpe bolimbisi ya masumu")["wav"]
sf.write("outfile.wav", wav.numpy(), text2speech.fs, "PCM_16")
```
|
Vandita/distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1
|
Vandita
| 2022-11-22T10:00:23Z | 210 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-22T09:46:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8333
## Model description
More information needed
## Intended uses & limitations
More information needed
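As a hedged illustration (not in the original card), a masked-language-model checkpoint such as this one can be exercised with the fill-mask pipeline; the example sentence is invented and simply uses RoBERTa's `<mask>` token.
```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="Vandita/distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1",
)
# Prints the top predicted fillers with their scores.
print(unmasker("Oh great, another Monday <mask>."))
```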
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2176 | 1.0 | 768 | 2.9178 |
| 2.9632 | 2.0 | 1536 | 2.8355 |
| 2.9201 | 3.0 | 2304 | 2.8462 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1
|
research-backup
| 2022-11-22T09:36:49Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:29:06Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8196825396825397
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.56951871657754
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5667655786350149
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7048360200111173
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5219298245614035
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5254629629629629
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9171312339912611
- name: F1 (macro)
type: f1_macro
value: 0.9144097053161149
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8591549295774648
- name: F1 (macro)
type: f1_macro
value: 0.6897906667708522
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6598049837486457
- name: F1 (macro)
type: f1_macro
value: 0.6435072053448491
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9591708979620227
- name: F1 (macro)
type: f1_macro
value: 0.8844226567513357
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8990911939830774
- name: F1 (macro)
type: f1_macro
value: 0.8971436130443764
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.56951871657754
- Accuracy on SAT: 0.5667655786350149
- Accuracy on BATS: 0.7048360200111173
- Accuracy on U2: 0.5219298245614035
- Accuracy on U4: 0.5254629629629629
- Accuracy on Google: 0.928
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9171312339912611
- Micro F1 score on CogALexV: 0.8591549295774648
- Micro F1 score on EVALution: 0.6598049837486457
- Micro F1 score on K&H+N: 0.9591708979620227
- Micro F1 score on ROOT09: 0.8990911939830774
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8196825396825397
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the pair; shape (768,) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|