modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars)
---|---|---|---|---|---|---
BigTooth/DialoGPT-Megumin | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | 2022-08-16T08:03:06Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- naem1023/aihub-dialogue
model-index:
- name: bart-v2-dialouge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-v2-dialouge
This model is a fine-tuned version of [hyunwoongko/kobart](https://huggingface.co/hyunwoongko/kobart) on the naem1023/aihub-dialogue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 150
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1200
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6.0
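These settings map directly onto `TrainingArguments` from `transformers`; a minimal sketch, assuming the standard `Trainer` setup (the `output_dir` is illustrative, and Adam's betas and epsilon are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bart-v2-dialouge",   # illustrative output path
    learning_rate=1e-5,
    per_device_train_batch_size=150,
    per_device_eval_batch_size=40,
    seed=42,
    gradient_accumulation_steps=8,   # 150 x 8 = 1200 effective train batch size
    lr_scheduler_type="linear",
    num_train_epochs=6.0,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the library defaults.
)
```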
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigTooth/DialoGPT-small-tohru | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2022-08-16T08:07:40Z | ---
tags:
- Issue_fixed
- textattack
- textclassification
- entailment
license: mit
datasets:
- mnli
metrics:
- accuracy
---
This model fixes the label-mapping issue in textattack/bert-base-uncased-MNLI: with the original model, the predicted labels are systematically confused relative to the Hugging Face MNLI dataset's label order. See the GitHub issue: https://github.com/QData/TextAttack/issues/684. After the fix, accuracy_mm is 84.44%, roughly 7 percentage points higher than before the fix was applied.
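A quick way to check a fix like this is to inspect the model's label mapping against the dataset convention; a minimal sketch, assuming the standard `transformers` API (the repo id is a placeholder, since the card does not state this model's Hub path):
```python
from transformers import AutoModelForSequenceClassification

# Placeholder repo id -- the card does not state the fixed model's Hub path.
model = AutoModelForSequenceClassification.from_pretrained("<fixed-mnli-model>")

# The Hugging Face MNLI convention is 0: entailment, 1: neutral, 2: contradiction.
print(model.config.id2label)
``` |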
Bilz/DialoGPT-small-harrypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1176.34 +/- 238.60
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="Saraswati/a2c-AntBulletEnv-v0",
    filename="{MODEL FILENAME}.zip",  # placeholder filename from the original card
)
model = A2C.load(checkpoint)
```
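With the checkpoint loaded, the agent can be evaluated; a minimal sketch, assuming `pybullet_envs` is installed (it registers `AntBulletEnv-v0`):
```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```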
|
BinksSachary/ShaxxBot2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | --alpha_ce 0.0 --alpha_mlm 2.0 --alpha_cos 0.0 --alpha_act 1.0 --alpha_clm 0.0 --mlm \ |
Blazeolmo/Scrabunzi | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: tiny-bert-qa-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-qa-es
This model is a fine-tuned version of [CenIA/albert-tiny-spanish](https://huggingface.co/CenIA/albert-tiny-spanish) on the squad_es dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Blerrrry/Kkk | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-16T10:01:27Z | This is the IndicBART model fine-tuned on the PMI and PIB datasets for XX-to-En translation. For detailed documentation, look here: https://indicnlp.ai4bharat.org/indic-bart/ and https://github.com/AI4Bharat/indic-bart/
Usage:
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBART-XXEN", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBART-XXEN", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART-XXEN")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBART-XXEN")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how IndicBART-XXEN was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("मैं एक लड़का हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
out = tokenizer("<2en> I am a boy </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # I am a boy
```
Notes:
1. This is compatible with the latest version of transformers, but it was developed with version 4.3.2, so consider using 4.3.2 if possible.
2. While I have only shown how to get the logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as described at https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I use the AlbertTokenizer class and not the MBartTokenizer class. |
BlightZz/MakiseKurisu | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
datasets:
- squad_v2
language: en
license: mit
pipeline_tag: question-answering
tags:
- deberta
- deberta-v3
model-index:
- name: navteca/deberta-v3-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 83.8248
verified: true
- name: F1
type: f1
value: 87.41
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 84.9678
verified: true
- name: F1
type: f1
value: 92.2777
verified: true
---
# Deberta v3 base model for QA (SQuAD 2.0)
This is the [deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Training Data
The model has been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
It can be used for question-answering tasks.
## Usage and Performance
The trained model can be used like this:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
deberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/deberta-v3-base-squad2')
deberta_tokenizer = AutoTokenizer.from_pretrained('navteca/deberta-v3-base-squad2')
# Get predictions
nlp = pipeline('question-answering', model=deberta_model, tokenizer=deberta_tokenizer)
result = nlp({
'question': 'How many people live in Berlin?',
'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})
print(result)
#{
# "answer": "3,520,031"
# "end": 36,
# "score": 0.96186668,
# "start": 27,
#}
```
## Author
[deepset](http://deepset.ai/)
|
BobBraico/bert-finetuned-ner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-banking77-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9321428571428572
- task:
type: text-classification
name: Text Classification
dataset:
name: banking77
type: banking77
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9321428571428572
verified: true
- name: Precision Macro
type: precision
value: 0.9339627666926148
verified: true
- name: Precision Micro
type: precision
value: 0.9321428571428572
verified: true
- name: Precision Weighted
type: precision
value: 0.9339627666926148
verified: true
- name: Recall Macro
type: recall
value: 0.9321428571428572
verified: true
- name: Recall Micro
type: recall
value: 0.9321428571428572
verified: true
- name: Recall Weighted
type: recall
value: 0.9321428571428572
verified: true
- name: F1 Macro
type: f1
value: 0.9320514513719953
verified: true
- name: F1 Micro
type: f1
value: 0.9321428571428572
verified: true
- name: F1 Weighted
type: f1
value: 0.9320514513719956
verified: true
- name: loss
type: loss
value: 0.30337899923324585
verified: true
widget:
- text: 'Can I track the card you sent to me? '
example_title: Card Arrival Example - English
- text: 'Posso tracciare la carta che mi avete spedito? '
example_title: Card Arrival Example - Italian
- text: Can you explain your exchange rate policy to me?
example_title: Exchange Rate Example - English
- text: Potete spiegarmi la vostra politica dei tassi di cambio?
example_title: Exchange Rate Example - Italian
- text: I can't pay by my credit card
example_title: Card Not Working Example - English
- text: Non riesco a pagare con la mia carta di credito
example_title: Card Not Working Example - Italian
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-banking77-classification
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3034
- Accuracy: 0.9321
- F1 Score: 0.9321
## Model description
An experiment with a cross-lingual model: it assesses how accurate classification remains when the model is fine-tuned on an English dataset but later queried in Italian.
## Intended uses & limitations
The model can be used for text classification. In particular, it is fine-tuned on the banking domain for multilingual use.
## Training and evaluation data
The dataset used is [banking77](https://huggingface.co/datasets/banking77)
The 77 labels are:
|label|intent|
|:---:|:----:|
|0|activate_my_card|
|1|age_limit|
|2|apple_pay_or_google_pay|
|3|atm_support|
|4|automatic_top_up|
|5|balance_not_updated_after_bank_transfer|
|6|balance_not_updated_after_cheque_or_cash_deposit|
|7|beneficiary_not_allowed|
|8|cancel_transfer|
|9|card_about_to_expire|
|10|card_acceptance|
|11|card_arrival|
|12|card_delivery_estimate|
|13|card_linking|
|14|card_not_working|
|15|card_payment_fee_charged|
|16|card_payment_not_recognised|
|17|card_payment_wrong_exchange_rate|
|18|card_swallowed|
|19|cash_withdrawal_charge|
|20|cash_withdrawal_not_recognised|
|21|change_pin|
|22|compromised_card|
|23|contactless_not_working|
|24|country_support|
|25|declined_card_payment|
|26|declined_cash_withdrawal|
|27|declined_transfer|
|28|direct_debit_payment_not_recognised|
|29|disposable_card_limits|
|30|edit_personal_details|
|31|exchange_charge|
|32|exchange_rate|
|33|exchange_via_app|
|34|extra_charge_on_statement|
|35|failed_transfer|
|36|fiat_currency_support|
|37|get_disposable_virtual_card|
|38|get_physical_card|
|39|getting_spare_card|
|40|getting_virtual_card|
|41|lost_or_stolen_card|
|42|lost_or_stolen_phone|
|43|order_physical_card|
|44|passcode_forgotten|
|45|pending_card_payment|
|46|pending_cash_withdrawal|
|47|pending_top_up|
|48|pending_transfer|
|49|pin_blocked|
|50|receiving_money|
|51|Refund_not_showing_up|
|52|request_refund|
|53|reverted_card_payment?|
|54|supported_cards_and_currencies|
|55|terminate_account|
|56|top_up_by_bank_transfer_charge|
|57|top_up_by_card_charge|
|58|top_up_by_cash_or_cheque|
|59|top_up_failed|
|60|top_up_limits|
|61|top_up_reverted|
|62|topping_up_by_card|
|63|transaction_charged_twice|
|64|transfer_fee_charged|
|65|transfer_into_account|
|66|transfer_not_received_by_recipient|
|67|transfer_timing|
|68|unable_to_verify_identity|
|69|verify_my_identity|
|70|verify_source_of_funds|
|71|verify_top_up|
|72|virtual_card_not_working|
|73|visa_or_mastercard|
|74|why_verify_identity|
|75|wrong_amount_of_cash_received|
|76|wrong_exchange_rate_for_cash_withdrawal|
## Training procedure
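A quick inference example with the `pipeline` API: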
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="nickprock/xlm-roberta-base-banking77-classification")
pipe("Non riesco a pagare con la carta di credito")
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 3.8002 | 1.0 | 157 | 2.7771 | 0.5159 | 0.4483 |
| 2.4006 | 2.0 | 314 | 1.6937 | 0.7140 | 0.6720 |
| 1.4633 | 3.0 | 471 | 1.0385 | 0.8308 | 0.8153 |
| 0.9234 | 4.0 | 628 | 0.7008 | 0.8789 | 0.8761 |
| 0.6163 | 5.0 | 785 | 0.5029 | 0.9068 | 0.9063 |
| 0.4282 | 6.0 | 942 | 0.4084 | 0.9123 | 0.9125 |
| 0.3203 | 7.0 | 1099 | 0.3515 | 0.9253 | 0.9253 |
| 0.245 | 8.0 | 1256 | 0.3295 | 0.9227 | 0.9225 |
| 0.1863 | 9.0 | 1413 | 0.3092 | 0.9269 | 0.9269 |
| 0.1518 | 10.0 | 1570 | 0.2901 | 0.9338 | 0.9338 |
| 0.1179 | 11.0 | 1727 | 0.2938 | 0.9318 | 0.9319 |
| 0.0969 | 12.0 | 1884 | 0.2906 | 0.9328 | 0.9328 |
| 0.0805 | 13.0 | 2041 | 0.2963 | 0.9295 | 0.9295 |
| 0.063 | 14.0 | 2198 | 0.2998 | 0.9289 | 0.9288 |
| 0.0554 | 15.0 | 2355 | 0.2933 | 0.9351 | 0.9349 |
| 0.046 | 16.0 | 2512 | 0.2960 | 0.9328 | 0.9326 |
| 0.04 | 17.0 | 2669 | 0.3032 | 0.9318 | 0.9318 |
| 0.035 | 18.0 | 2826 | 0.3061 | 0.9312 | 0.9312 |
| 0.0317 | 19.0 | 2983 | 0.3030 | 0.9331 | 0.9330 |
| 0.0315 | 20.0 | 3140 | 0.3034 | 0.9321 | 0.9321 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BonjinKim/dst_kor_bert | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | null | {
"architectures": [
"BertForPreTraining"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 193.41 +/- 23.10
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Repo id and filename are placeholders -- the card does not state them.
checkpoint = load_from_hub(repo_id="<repo-id>", filename="<model-filename>.zip")
model = PPO.load(checkpoint)
```
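Once loaded, the agent can be evaluated; a minimal sketch, assuming `gym`'s Box2D extras are installed for `LunarLander-v2`:
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```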
|
BotterHax/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
Brinah/1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
widget:
- text: "You've just won $1000. Contact now at +9211122233 to confirm the lottery!"
example_title: "Example 1"
- text: "Hello. Are you joining us for the party tonight?"
example_title: "Example 2"
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book."
example_title: "Example 3"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night."
example_title: "Example 4"
datasets:
- SalehAhmad/Spam-Ham
--- |
Brokette/projetCS | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- metrics:
- type: mean_reward
value: 231.98 +/- 16.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Repo id and filename are placeholders -- the card does not state them.
checkpoint = load_from_hub(repo_id="<repo-id>", filename="<model-filename>.zip")
model = PPO.load(checkpoint)
```
|
Brykee/DialoGPT-medium-Morty | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 2.0.0
- Tokenizers 0.10.3
|
BumBelDumBel/ZORK_AI_SCIFI | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1291
- Accuracy: 0.9429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2296 | 1.0 | 318 | 0.8290 | 0.7571 |
| 0.6433 | 2.0 | 636 | 0.4200 | 0.8961 |
| 0.3495 | 3.0 | 954 | 0.2493 | 0.9206 |
| 0.2254 | 4.0 | 1272 | 0.1835 | 0.9335 |
| 0.1726 | 5.0 | 1590 | 0.1576 | 0.9371 |
| 0.1467 | 6.0 | 1908 | 0.1442 | 0.9423 |
| 0.1318 | 7.0 | 2226 | 0.1360 | 0.9426 |
| 0.1229 | 8.0 | 2544 | 0.1323 | 0.9435 |
| 0.1185 | 9.0 | 2862 | 0.1299 | 0.9426 |
| 0.1151 | 10.0 | 3180 | 0.1291 | 0.9429 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
CAMeL-Lab/bert-base-arabic-camelbert-ca | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 580 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 183.96 +/- 75.63
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Repo id and filename are placeholders -- the card does not state them.
checkpoint = load_from_hub(repo_id="<repo-id>", filename="<model-filename>.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 449 | null | ---
language: pl
tags:
- distilherbert
---
## distilHerBERT
distilHerBERT-base is a BERT-based language model trained on the Polish subset of the [cc100](https://huggingface.co/datasets/cc100) dataset using Masked Language Modelling (MLM) and the [distillation procedure](https://arxiv.org/abs/1910.01108) from the [HerBERT](https://huggingface.co/allegro/herbert-base-cased) model, with dynamic masking of whole words.
We provide one of the models (S4) described in the report from the final project on (Deep) Natural Language Processing carried out at MIMUW in 2021/2022: [Distillation_of_HerBERT](https://github.com/BartekKrzepkowski/DistilHerBERT-base_vol2/blob/master/report/Final_Report___Distillation_of_HerBERT.pdf).
The model was trained using fp16 and data parallelism (ZeRO Stage 2) with the DeepSpeed deep learning optimization library.
Model training and experiments were conducted with transformers version 4.20.1.
## Tokenizer
The training dataset was tokenized into subwords using character-level byte-pair encoding (``CharBPETokenizer``) with
a vocabulary size of 50k tokens. The tokenizer itself was trained with the [tokenizers](https://github.com/huggingface/tokenizers) library.
We kindly encourage you to use the ``Fast`` version of the tokenizer, namely ``HerbertTokenizerFast``.
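A minimal sketch of loading the fast tokenizer directly (the repo id is taken from the usage example below):
```python
from transformers import HerbertTokenizerFast

tokenizer = HerbertTokenizerFast.from_pretrained("BartekK/distilHerBERT-base-cased")
print(tokenizer.tokenize("A potem szedł środkiem drogi w kurzawie."))
```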
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("BartekK/distilHerBERT-base-cased")
model = AutoModelForMaskedLM.from_pretrained("BartekK/distilHerBERT-base-cased")
output = model(
**tokenizer.batch_encode_plus(
[
(
"A potem szedł środkiem drogi w kurzawie, bo zamiatał nogami, ślepy dziad prowadzony przez tłustego kundla na sznurku.",
"A potem leciał od lasu chłopak z butelką, ale ten ujrzawszy księdza przy drodze okrążył go z dala i biegł na przełaj pól do karczmy."
)
],
padding='longest',
add_special_tokens=True,
return_tensors='pt'
)
)
```
## Acknowledgements
We want to thank <br>
Spyridon Mouselinos - for suggesting literature to help with the project <br>
and <br>
Piotr Rybak - for sharing information on training the HerBERT models
## Authors
The model was trained by:
Bartłomiej Krzepkowski, <br>
Dominika Bankiewicz, <br>
Rafał Michaluk, <br>
Jacek Ciszewski.
If you have questions please contact me: <a href="mailto:[email protected]">[email protected]</a>
The code can be found here: [distilHerBERT-base repo](https://github.com/BartekKrzepkowski/DistilHerBERT-base_vol2/tree/master). |
CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---
## MahaBERT
MahaBERT is a Marathi BERT model. It is a multilingual BERT (google/muril-base-cased) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159).
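A minimal fill-mask sketch, assuming the standard `transformers` API (the repo id is a placeholder, since the card does not state the model's Hub path):
```python
from transformers import pipeline

# Placeholder repo id -- the card does not state the model's Hub path.
fill_mask = pipeline("fill-mask", model="<mahabert-repo-id>")
# MuRIL-based models use the standard [MASK] token in input text.
```
Citation: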
```
@inproceedings{joshi-2022-l3cube,
title = "{L}3{C}ube-{M}aha{C}orpus and {M}aha{BERT}: {M}arathi Monolingual Corpus, {M}arathi {BERT} Language Models, and Resources",
author = "Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.17",
pages = "97--101",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5 | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 75 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.929042904290429
- name: Recall
type: recall
value: 0.9474924267923258
- name: F1
type: f1
value: 0.9381769705049159
- name: Accuracy
type: accuracy
value: 0.985783246011656
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0641
- Precision: 0.9290
- Recall: 0.9475
- F1: 0.9382
- Accuracy: 0.9858
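The card does not include a usage example; a minimal inference sketch with the `pipeline` API (the repo id is a placeholder, since the card does not give the Hub path):
```python
from transformers import pipeline

# Placeholder repo id -- the card does not give the model's Hub path.
ner = pipeline("token-classification", model="<repo-id>", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```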
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0867 | 1.0 | 1756 | 0.0716 | 0.9102 | 0.9297 | 0.9198 | 0.9820 |
| 0.0345 | 2.0 | 3512 | 0.0680 | 0.9290 | 0.9465 | 0.9376 | 0.9854 |
| 0.0191 | 3.0 | 5268 | 0.0641 | 0.9290 | 0.9475 | 0.9382 | 0.9858 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 71 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- f1
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: F1
type: f1
value: 0.9254722461324877
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1933
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:------:|
| 0.2232 | 1.0 | 1563 | 0.1933 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-korean-demo-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-korean-demo-test
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9829
- Wer: 0.5580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 8.1603 | 0.4 | 400 | 5.0560 | 1.0 |
| 3.0513 | 0.79 | 800 | 2.1226 | 0.9984 |
| 1.7673 | 1.19 | 1200 | 1.2358 | 0.9273 |
| 1.4577 | 1.59 | 1600 | 1.0198 | 0.8512 |
| 1.3308 | 1.98 | 2000 | 0.9258 | 0.8325 |
| 1.1798 | 2.38 | 2400 | 0.8587 | 0.7933 |
| 1.1268 | 2.77 | 2800 | 0.8166 | 0.7677 |
| 1.0664 | 3.17 | 3200 | 0.7911 | 0.7428 |
| 0.9923 | 3.57 | 3600 | 0.7964 | 0.7481 |
| 1.0059 | 3.96 | 4000 | 0.7617 | 0.7163 |
| 0.9141 | 4.36 | 4400 | 0.7854 | 0.7280 |
| 0.8939 | 4.76 | 4800 | 0.7364 | 0.7160 |
| 0.8689 | 5.15 | 5200 | 0.7895 | 0.6996 |
| 0.8236 | 5.55 | 5600 | 0.7756 | 0.7100 |
| 0.8409 | 5.95 | 6000 | 0.7433 | 0.6915 |
| 0.7643 | 6.34 | 6400 | 0.7566 | 0.6993 |
| 0.7601 | 6.74 | 6800 | 0.7873 | 0.6836 |
| 0.7367 | 7.14 | 7200 | 0.7353 | 0.6640 |
| 0.7099 | 7.53 | 7600 | 0.7421 | 0.6766 |
| 0.7084 | 7.93 | 8000 | 0.7396 | 0.6740 |
| 0.6837 | 8.32 | 8400 | 0.7717 | 0.6647 |
| 0.6513 | 8.72 | 8800 | 0.7763 | 0.6798 |
| 0.6458 | 9.12 | 9200 | 0.7659 | 0.6494 |
| 0.6132 | 9.51 | 9600 | 0.7693 | 0.6511 |
| 0.6287 | 9.91 | 10000 | 0.7555 | 0.6469 |
| 0.6008 | 10.31 | 10400 | 0.7606 | 0.6408 |
| 0.5796 | 10.7 | 10800 | 0.7622 | 0.6397 |
| 0.5753 | 11.1 | 11200 | 0.7816 | 0.6510 |
| 0.5531 | 11.5 | 11600 | 0.8351 | 0.6658 |
| 0.5215 | 11.89 | 12000 | 0.7843 | 0.6416 |
| 0.5205 | 12.29 | 12400 | 0.7674 | 0.6256 |
| 0.5219 | 12.69 | 12800 | 0.7594 | 0.6287 |
| 0.5186 | 13.08 | 13200 | 0.7863 | 0.6243 |
| 0.473 | 13.48 | 13600 | 0.8209 | 0.6469 |
| 0.4938 | 13.87 | 14000 | 0.8002 | 0.6241 |
| 0.474 | 14.27 | 14400 | 0.8008 | 0.6122 |
| 0.442 | 14.67 | 14800 | 0.8047 | 0.6089 |
| 0.4521 | 15.06 | 15200 | 0.8341 | 0.6123 |
| 0.4289 | 15.46 | 15600 | 0.8217 | 0.6122 |
| 0.4278 | 15.86 | 16000 | 0.8400 | 0.6152 |
| 0.4051 | 16.25 | 16400 | 0.8634 | 0.6182 |
| 0.4063 | 16.65 | 16800 | 0.8486 | 0.6097 |
| 0.4101 | 17.05 | 17200 | 0.8825 | 0.6002 |
| 0.3896 | 17.44 | 17600 | 0.9575 | 0.6205 |
| 0.3833 | 17.84 | 18000 | 0.8946 | 0.6216 |
| 0.3678 | 18.24 | 18400 | 0.8905 | 0.5952 |
| 0.3715 | 18.63 | 18800 | 0.8918 | 0.5994 |
| 0.3748 | 19.03 | 19200 | 0.8856 | 0.5953 |
| 0.3485 | 19.42 | 19600 | 0.9326 | 0.5906 |
| 0.3522 | 19.82 | 20000 | 0.9237 | 0.5932 |
| 0.3551 | 20.22 | 20400 | 0.9274 | 0.5932 |
| 0.3339 | 20.61 | 20800 | 0.9075 | 0.5883 |
| 0.3354 | 21.01 | 21200 | 0.9306 | 0.5861 |
| 0.318 | 21.41 | 21600 | 0.8994 | 0.5854 |
| 0.3235 | 21.8 | 22000 | 0.9114 | 0.5831 |
| 0.3201 | 22.2 | 22400 | 0.9415 | 0.5867 |
| 0.308 | 22.6 | 22800 | 0.9695 | 0.5807 |
| 0.3049 | 22.99 | 23200 | 0.9166 | 0.5765 |
| 0.2858 | 23.39 | 23600 | 0.9643 | 0.5746 |
| 0.2938 | 23.79 | 24000 | 0.9461 | 0.5724 |
| 0.2856 | 24.18 | 24400 | 0.9658 | 0.5710 |
| 0.2827 | 24.58 | 24800 | 0.9534 | 0.5693 |
| 0.2745 | 24.97 | 25200 | 0.9436 | 0.5675 |
| 0.2705 | 25.37 | 25600 | 0.9849 | 0.5701 |
| 0.2656 | 25.77 | 26000 | 0.9854 | 0.5662 |
| 0.2645 | 26.16 | 26400 | 0.9795 | 0.5662 |
| 0.262 | 26.56 | 26800 | 0.9496 | 0.5626 |
| 0.2553 | 26.96 | 27200 | 0.9787 | 0.5659 |
| 0.2602 | 27.35 | 27600 | 0.9814 | 0.5640 |
| 0.2519 | 27.75 | 28000 | 0.9816 | 0.5631 |
| 0.2386 | 28.15 | 28400 | 1.0012 | 0.5580 |
| 0.2398 | 28.54 | 28800 | 0.9892 | 0.5567 |
| 0.2368 | 28.94 | 29200 | 0.9909 | 0.5590 |
| 0.2366 | 29.34 | 29600 | 0.9827 | 0.5567 |
| 0.2347 | 29.73 | 30000 | 0.9829 | 0.5580 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CAUKiel/JavaBERT-uncased | [
"pytorch",
"safetensors",
"bert",
"fill-mask",
"java",
"code",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | --alpha_ce 0.0 --alpha_mlm 0.0 --alpha_cos 0.0 --alpha_act 1.0 --alpha_clm 0.0 --mlm \ |
CBreit00/DialoGPT_small_Rick | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- feature-extraction
- endpoints-template
license: bsd-3-clause
library_name: generic
---
# Fork of [salesforce/BLIP](https://github.com/salesforce/BLIP) for a `feature-extraction` task on 🤗 Inference Endpoints.
This repository implements a `custom` task for `feature-extraction` for 🤗 Inference Endpoints. The code for the customized pipeline is in the [pipeline.py](https://huggingface.co/florentgbelidji/blip-embeddings/blob/main/pipeline.py).
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `pipeline.py` file is used -> _double-check that it is selected_.
### Expected request payload
```json
{
"image": "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgICAgMC....", // base64 image as bytes
}
```
Below is an example of how to run a request using Python and `requests`.
## Run Request
1. Prepare an image.
```bash
!wget https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
```
2. Run the request:
```python
import json
from typing import List
import requests as r
import base64
ENDPOINT_URL = ""
HF_TOKEN = ""
def predict(path_to_image: str = None):
with open(path_to_image, "rb") as i:
b64 = base64.b64encode(i.read())
payload = {"inputs": {"image": b64.decode("utf-8")}}
response = r.post(
ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
)
return response.json()
prediction = predict(
path_to_image="palace.jpg"
)
```
Expected output:
```python
{'feature_vector': [0.016450975090265274,
-0.5551009774208069,
0.39800673723220825,
-0.6809228658676147,
2.053842782974243,
-0.4712907075881958,...]
}
``` |
CL/safe-math-bot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7003 | 0.54 | 500 | 1.4859 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 2.0.0
- Tokenizers 0.10.3
|
CLAck/indo-pure | [
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-256
---
De-noising Diffusion Probabilistic Model trained on [teticio/audio-diffusion-256](https://huggingface.co/datasets/teticio/audio-diffusion-256) to generate 256x256 mel spectrograms, each corresponding to 5 seconds of audio. The code to convert between audio and spectrograms can be found at https://github.com/teticio/audio-diffusion, along with scripts to train and run inference.
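A sketch of unconditional sampling with the generic `DDPMPipeline` from `diffusers`, assuming the checkpoint follows the standard `diffusers` layout (the repo id mirroring the dataset name is an assumption):
```python
from diffusers import DDPMPipeline

# Assumed repo id; see the linked GitHub repo for the
# audio <-> mel-spectrogram conversion utilities.
pipe = DDPMPipeline.from_pretrained("teticio/audio-diffusion-256")
image = pipe().images[0]  # one 256x256 mel spectrogram (~5 s of audio)
image.save("mel_spectrogram.png")
``` |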
CLAck/vi-en | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol-finetuned-nl-to-fol-version2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol-finetuned-nl-to-fol-version2
This model is a fine-tuned version of [anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol](https://huggingface.co/anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0069
- Bleu: 28.1311
- Gen Len: 18.7412
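A minimal inference sketch (the parent checkpoint named above stands in for this model's repo id; whether the training script used a task prefix is not recorded here, so none is added):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# The parent checkpoint stands in for this model's (unpublished) repo id
name = "anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Every dog chases a cat.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: a FOL formula
```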
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 22 | 0.0692 | 27.4908 | 18.7353 |
| No log | 2.0 | 44 | 0.0631 | 27.554 | 18.7294 |
| No log | 3.0 | 66 | 0.0533 | 27.6007 | 18.7294 |
| No log | 4.0 | 88 | 0.0484 | 27.6446 | 18.7294 |
| No log | 5.0 | 110 | 0.0439 | 27.6401 | 18.7294 |
| No log | 6.0 | 132 | 0.0404 | 27.5117 | 18.7294 |
| No log | 7.0 | 154 | 0.0389 | 27.6358 | 18.7294 |
| No log | 8.0 | 176 | 0.0362 | 27.6358 | 18.7294 |
| No log | 9.0 | 198 | 0.0339 | 27.5731 | 18.7294 |
| No log | 10.0 | 220 | 0.0319 | 27.2326 | 18.6882 |
| No log | 11.0 | 242 | 0.0298 | 27.2326 | 18.6882 |
| No log | 12.0 | 264 | 0.0293 | 27.5498 | 18.7294 |
| No log | 13.0 | 286 | 0.0276 | 27.6566 | 18.7294 |
| No log | 14.0 | 308 | 0.0268 | 27.6566 | 18.7294 |
| No log | 15.0 | 330 | 0.0251 | 27.6107 | 18.7294 |
| No log | 16.0 | 352 | 0.0239 | 27.7096 | 18.7294 |
| No log | 17.0 | 374 | 0.0228 | 27.6716 | 18.7294 |
| No log | 18.0 | 396 | 0.0231 | 27.8083 | 18.7294 |
| No log | 19.0 | 418 | 0.0218 | 27.4838 | 18.6882 |
| No log | 20.0 | 440 | 0.0212 | 27.4712 | 18.6882 |
| No log | 21.0 | 462 | 0.0197 | 27.8787 | 18.7353 |
| No log | 22.0 | 484 | 0.0207 | 27.6899 | 18.6941 |
| 0.1026 | 23.0 | 506 | 0.0186 | 27.6376 | 18.6941 |
| 0.1026 | 24.0 | 528 | 0.0202 | 27.6672 | 18.6941 |
| 0.1026 | 25.0 | 550 | 0.0174 | 28.0172 | 18.7412 |
| 0.1026 | 26.0 | 572 | 0.0170 | 27.8714 | 18.7412 |
| 0.1026 | 27.0 | 594 | 0.0164 | 27.7423 | 18.7412 |
| 0.1026 | 28.0 | 616 | 0.0164 | 27.8278 | 18.7412 |
| 0.1026 | 29.0 | 638 | 0.0163 | 27.8278 | 18.7412 |
| 0.1026 | 30.0 | 660 | 0.0158 | 27.907 | 18.7412 |
| 0.1026 | 31.0 | 682 | 0.0165 | 27.7752 | 18.7412 |
| 0.1026 | 32.0 | 704 | 0.0147 | 27.8284 | 18.7412 |
| 0.1026 | 33.0 | 726 | 0.0150 | 27.8862 | 18.7412 |
| 0.1026 | 34.0 | 748 | 0.0148 | 27.8402 | 18.7412 |
| 0.1026 | 35.0 | 770 | 0.0141 | 27.8353 | 18.7412 |
| 0.1026 | 36.0 | 792 | 0.0142 | 27.858 | 18.7412 |
| 0.1026 | 37.0 | 814 | 0.0143 | 27.858 | 18.7412 |
| 0.1026 | 38.0 | 836 | 0.0158 | 27.8353 | 18.7412 |
| 0.1026 | 39.0 | 858 | 0.0125 | 27.8913 | 18.7412 |
| 0.1026 | 40.0 | 880 | 0.0121 | 27.9167 | 18.7412 |
| 0.1026 | 41.0 | 902 | 0.0122 | 27.9569 | 18.7412 |
| 0.1026 | 42.0 | 924 | 0.0126 | 27.9569 | 18.7412 |
| 0.1026 | 43.0 | 946 | 0.0120 | 28.001 | 18.7412 |
| 0.1026 | 44.0 | 968 | 0.0125 | 28.0079 | 18.7412 |
| 0.1026 | 45.0 | 990 | 0.0115 | 28.0079 | 18.7412 |
| 0.072 | 46.0 | 1012 | 0.0113 | 27.9851 | 18.7412 |
| 0.072 | 47.0 | 1034 | 0.0113 | 28.0184 | 18.7412 |
| 0.072 | 48.0 | 1056 | 0.0110 | 28.0184 | 18.7412 |
| 0.072 | 49.0 | 1078 | 0.0108 | 28.0184 | 18.7412 |
| 0.072 | 50.0 | 1100 | 0.0107 | 28.0184 | 18.7412 |
| 0.072 | 51.0 | 1122 | 0.0101 | 28.0184 | 18.7412 |
| 0.072 | 52.0 | 1144 | 0.0102 | 28.0184 | 18.7412 |
| 0.072 | 53.0 | 1166 | 0.0099 | 28.0184 | 18.7412 |
| 0.072 | 54.0 | 1188 | 0.0100 | 28.0184 | 18.7412 |
| 0.072 | 55.0 | 1210 | 0.0102 | 28.0184 | 18.7412 |
| 0.072 | 56.0 | 1232 | 0.0095 | 28.0184 | 18.7412 |
| 0.072 | 57.0 | 1254 | 0.0098 | 28.0184 | 18.7412 |
| 0.072 | 58.0 | 1276 | 0.0092 | 28.0184 | 18.7412 |
| 0.072 | 59.0 | 1298 | 0.0090 | 28.0184 | 18.7412 |
| 0.072 | 60.0 | 1320 | 0.0095 | 28.0184 | 18.7412 |
| 0.072 | 61.0 | 1342 | 0.0092 | 27.9674 | 18.7412 |
| 0.072 | 62.0 | 1364 | 0.0091 | 27.9419 | 18.7412 |
| 0.072 | 63.0 | 1386 | 0.0100 | 27.9419 | 18.7412 |
| 0.072 | 64.0 | 1408 | 0.0084 | 28.0752 | 18.7412 |
| 0.072 | 65.0 | 1430 | 0.0086 | 28.0192 | 18.7412 |
| 0.072 | 66.0 | 1452 | 0.0084 | 28.0192 | 18.7412 |
| 0.072 | 67.0 | 1474 | 0.0085 | 28.0192 | 18.7412 |
| 0.072 | 68.0 | 1496 | 0.0087 | 28.0192 | 18.7412 |
| 0.0575 | 69.0 | 1518 | 0.0084 | 28.0192 | 18.7412 |
| 0.0575 | 70.0 | 1540 | 0.0080 | 28.0192 | 18.7412 |
| 0.0575 | 71.0 | 1562 | 0.0082 | 28.0192 | 18.7412 |
| 0.0575 | 72.0 | 1584 | 0.0080 | 28.0192 | 18.7412 |
| 0.0575 | 73.0 | 1606 | 0.0075 | 28.0192 | 18.7412 |
| 0.0575 | 74.0 | 1628 | 0.0079 | 28.0192 | 18.7412 |
| 0.0575 | 75.0 | 1650 | 0.0078 | 28.0752 | 18.7412 |
| 0.0575 | 76.0 | 1672 | 0.0076 | 28.1311 | 18.7412 |
| 0.0575 | 77.0 | 1694 | 0.0073 | 28.1311 | 18.7412 |
| 0.0575 | 78.0 | 1716 | 0.0074 | 28.1311 | 18.7412 |
| 0.0575 | 79.0 | 1738 | 0.0072 | 28.1311 | 18.7412 |
| 0.0575 | 80.0 | 1760 | 0.0078 | 28.1311 | 18.7412 |
| 0.0575 | 81.0 | 1782 | 0.0077 | 28.1311 | 18.7412 |
| 0.0575 | 82.0 | 1804 | 0.0071 | 28.1311 | 18.7412 |
| 0.0575 | 83.0 | 1826 | 0.0072 | 28.1311 | 18.7412 |
| 0.0575 | 84.0 | 1848 | 0.0075 | 28.1311 | 18.7412 |
| 0.0575 | 85.0 | 1870 | 0.0071 | 28.1311 | 18.7412 |
| 0.0575 | 86.0 | 1892 | 0.0070 | 28.1311 | 18.7412 |
| 0.0575 | 87.0 | 1914 | 0.0069 | 28.1311 | 18.7412 |
| 0.0575 | 88.0 | 1936 | 0.0069 | 28.1311 | 18.7412 |
| 0.0575 | 89.0 | 1958 | 0.0069 | 28.1311 | 18.7412 |
| 0.0575 | 90.0 | 1980 | 0.0069 | 28.1311 | 18.7412 |
| 0.0509 | 91.0 | 2002 | 0.0069 | 28.1311 | 18.7412 |
| 0.0509 | 92.0 | 2024 | 0.0070 | 28.1311 | 18.7412 |
| 0.0509 | 93.0 | 2046 | 0.0069 | 28.1311 | 18.7412 |
| 0.0509 | 94.0 | 2068 | 0.0070 | 28.1311 | 18.7412 |
| 0.0509 | 95.0 | 2090 | 0.0069 | 28.1311 | 18.7412 |
| 0.0509 | 96.0 | 2112 | 0.0069 | 28.1311 | 18.7412 |
| 0.0509 | 97.0 | 2134 | 0.0069 | 28.1311 | 18.7412 |
| 0.0509 | 98.0 | 2156 | 0.0069 | 28.1311 | 18.7412 |
| 0.0509 | 99.0 | 2178 | 0.0069 | 28.1311 | 18.7412 |
| 0.0509 | 100.0 | 2200 | 0.0069 | 28.1311 | 18.7412 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CLEE/CLEE | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.923246780342909
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2160
- Accuracy: 0.923
- F1: 0.9232
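A minimal inference sketch (the repo id is a placeholder; label names come from the saved config, so the example output is indicative only):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the Hub path of this checkpoint
classifier = pipeline("text-classification", model="<user>/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how wonderful this day turned out!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- exact labels depend on the config
```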
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8173 | 1.0 | 250 | 0.3152 | 0.8995 | 0.8957 |
| 0.2408 | 2.0 | 500 | 0.2160 | 0.923 | 0.9232 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CLTL/gm-ner-xlmrbase | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"nl",
"transformers",
"dighum",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3264
- eval_accuracy: 0.8867
- eval_f1: 0.8896
- eval_runtime: 253.6051
- eval_samples_per_second: 1.183
- eval_steps_per_second: 0.075
- step: 0
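A minimal inference sketch without the pipeline helper (the repo id is a placeholder; the index-to-label mapping comes from the saved config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id -- substitute the Hub path of this checkpoint
name = "<user>/finetuning-sentiment-model-3000-samples"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A surprisingly moving film.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # map indices to labels via model.config.id2label
```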
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CLTL/icf-domains | [
"pytorch",
"roberta",
"nl",
"transformers",
"license:mit",
"text-classification"
] | text-classification | {
"architectures": [
"RobertaForMultiLabelSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 35 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 285.40 +/- 14.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename are placeholders; the usual SB3 Hub convention is `<algo>-<env>.zip`):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id/filename -- substitute this checkpoint's actual Hub path
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CLTL/icf-levels-adm | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-future
results: []
widget:
- text: "We will have a good time."
example_title: "Positive"
- text: "We had a good time."
example_title: "Negative"
---
# distilbert-base-future
## Table of Contents
- [Model description](#model-description)
- [Intended uses & limitations](#intended-uses--limitations)
- [Training and evaluation data](#training-and-evaluation-data)
- [Training procedure](#training-procedure)
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [future-statements dataset](https://huggingface.co/datasets/fidsinn/future-statements).
It achieves the following results on the evaluation set:
- Train Loss: 0.1142
- Train Sparse Categorical Accuracy: 0.9613
- Validation Loss: 0.1272
- Validation Sparse Categorical Accuracy: 0.9625
- Epoch: 1
## Model description
- The model was created by graduate students [D. Baradari](https://huggingface.co/Dunya), [F. Bartels](https://huggingface.co/fidsinn), A. Dewald, [J. Peters](https://huggingface.co/jpeters92) as part of a data science module of the University of Leipzig.
- The model was created on 11/08/22.
- This is version 1.0.
- The model is a text classification model which is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
- Questions and comments can be sent via the [community tab](https://huggingface.co/fidsinn/distilbert-base-future/discussions)
## Intended uses & limitations
- The primary intended use is classifying an input sentence as either a future or a non-future statement.
- The model is primarily intended to be used by researchers to filter or label large numbers of sentences according to grammatical tense, as in the sketch below.
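A minimal classification sketch (the repo id comes from the links above; label names depend on the saved config, and since the card reports Keras training, you may need `framework="tf"` if only TensorFlow weights are published):

```python
from transformers import pipeline

# Repo id taken from the card links; label names depend on the saved config
classifier = pipeline("text-classification", model="fidsinn/distilbert-base-future")
print(classifier("We will have a good time."))  # expected: the 'future' class
print(classifier("We had a good time."))        # expected: the 'non-future' class
```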
## Training and evaluation data
- [Distilbert-base-future model](https://huggingface.co/fidsinn/distilbert-base-future) was trained and evaluated on the [future-statements dataset](https://huggingface.co/datasets/fidsinn/future-statements).
- [future-statements](https://huggingface.co/datasets/fidsinn/future-statements) is a dataset collected manually and automatically by graduate students [D. Baradari](https://huggingface.co/Dunya), [F. Bartels](https://huggingface.co/fidsinn), A. Dewald, [J. Peters](https://huggingface.co/jpeters92) of the University of Leipzig.
- We collected 2500 statements, 50% of which relate to future events and 50% of which relate to non-future events.
- The sole purpose of the dataset was the fine-tuning process of this model.
- Additional information on the dataset can be found on Huggingface: [future-statements dataset](https://huggingface.co/datasets/fidsinn/future-statements).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.3816 | 0.8594 | 0.1547 | 0.9475 | 0 |
| 0.1142 | 0.9613 | 0.1272 | 0.9625 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CLTL/icf-levels-att | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cuad
model-index:
- name: bert-small-finetuned-cuad-full-longer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-cuad-full-longer
This model is a fine-tuned version of [muhtasham/bert-small-finetuned-cuad-full](https://huggingface.co/muhtasham/bert-small-finetuned-cuad-full) on the cuad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0295
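A minimal extractive-QA sketch (the `muhtasham` namespace is assumed from the parent checkpoint; the contract clause is illustrative):

```python
from transformers import pipeline

# Assumed repo id, inferred from the parent checkpoint's namespace
qa = pipeline("question-answering", model="muhtasham/bert-small-finetuned-cuad-full-longer")

context = "This Agreement shall commence on January 1, 2021 and continue for a term of two years."
print(qa(question="On what date does the agreement commence?", context=context))
```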
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0306 | 1.0 | 23785 | 0.0263 |
| 0.025 | 2.0 | 47570 | 0.0275 |
| 0.022 | 3.0 | 71355 | 0.0295 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CLTL/icf-levels-ber | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- ko
license: mit
---
# smartmind/ko-sbert-augSTS-maxlength512
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
This model is [snunlp/KR-SBERT-V40K-klueNLI-augSTS](https://huggingface.co/snunlp/KR-SBERT-V40K-klueNLI-augSTS) with max input length 512.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('smartmind/ko-sbert-augSTS-maxlength512')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('smartmind/ko-sbert-augSTS-maxlength512')
model = AutoModel.from_pretrained('smartmind/ko-sbert-augSTS-maxlength512')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=snunlp/KR-SBERT-V40K-klueNLI-augSTS)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Application for document classification
Tutorial in Google Colab: https://colab.research.google.com/drive/1S6WSjOx9h6Wh_rX1Z2UXwx9i_uHLlOiM
|Model|Accuracy|
|-|-|
|KR-SBERT-Medium-NLI-STS|0.8400|
|KR-SBERT-V40K-NLI-STS|0.8400|
|KR-SBERT-V40K-NLI-augSTS|0.8511|
|KR-SBERT-V40K-klueNLI-augSTS|**0.8628**|
## Citation
```bibtex
@misc{kr-sbert,
author = {Park, Suzi and Hyopil Shin},
title = {KR-SBERT: A Pre-trained Korean-specific Sentence-BERT model},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/snunlp/KR-SBERT}}
}
``` |
CLTL/icf-levels-enr | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 3510.00 +/- 4506.87
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga rebolforces -f logs/
python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rebolforces
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 20000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('replay_buffer_kwargs', 'dict(handle_timeout_termination=False)'),
('normalize', False)])
```
|
CLTL/icf-levels-etn | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: Mostafa3zazi/arabicQA-finetuned-squad_arcd
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mostafa3zazi/arabicQA-finetuned-squad_arcd
This model is a fine-tuned version of [aubmindlab/araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9073
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 3e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3034, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 50, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.9073 | 0 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CM-CA/DialoGPT-small-cartman | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tw-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tw-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
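A minimal transcription sketch (the repo id is a placeholder; `sample.wav` is an illustrative local file):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the Hub path of this checkpoint
asr = pipeline("automatic-speech-recognition", model="<user>/wav2vec2-large-xls-r-300m-tw-small")
print(asr("sample.wav")["text"])
```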
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: arabicQA-finetuned-squad_arcd_manual_push
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arabicQA-finetuned-squad_arcd_manual_push
This model is a fine-tuned version of [aubmindlab/araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3885
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 3e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3034, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 50, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 2.3885 | 0 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CNT-UPenn/RoBERTa_for_seizureFrequency_QA | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b0.05
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.4851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b0.05
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8144
- Bleu: 7.4851
- Gen Len: 44.7914
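A minimal inference sketch (the repo id is a placeholder; the Romanian-to-English direction is assumed from the `ro-en` dataset config, and any prompt format used in training is not recorded here):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id -- substitute the Hub path of this checkpoint
name = "<user>/distilled-mt5-small-b0.05"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Aceasta este o propoziție de test.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```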
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CSResearcher/TestModel | [
"license:mit"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-test2
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.735
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-test2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8127
- Bleu: 7.735
- Gen Len: 44.5453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CSZay/bart | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b0.1
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.497
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b0.1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8190
- Bleu: 7.497
- Gen Len: 44.5613
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CTBC/ATS | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b0.5
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.5091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b0.5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8108
- Bleu: 7.5091
- Gen Len: 43.958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CZWin32768/xlm-align | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2106.06381",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b1
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.5172
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7924
- Bleu: 7.5172
- Gen Len: 44.1886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Caddy/UD | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b0.01
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.5421
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b0.01
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8163
- Bleu: 7.5421
- Gen Len: 44.4902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Callidior/bert2bert-base-arxiv-titlegen | [
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:arxiv_dataset",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | summarization | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 145 | null | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-padpt400
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-padpt400
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2218
- Accuracy: 0.6343
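A minimal keyword-spotting sketch (the repo id is a placeholder; the labels come from superb's `ks` config, e.g. 'yes', 'no', '_unknown_'):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the Hub path of this checkpoint
clf = pipeline("audio-classification", model="<user>/wav2vec2-base-ks-padpt400")
print(clf("sample.wav", top_k=3))
```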
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2948 | 1.0 | 50 | 1.6527 | 0.6108 |
| 0.8861 | 2.0 | 100 | 1.2653 | 0.6130 |
| 0.7809 | 3.0 | 150 | 1.2615 | 0.5924 |
| 0.7364 | 4.0 | 200 | 1.2218 | 0.6343 |
| 0.6944 | 5.0 | 250 | 1.2137 | 0.6324 |
| 0.6817 | 6.0 | 300 | 1.2822 | 0.5930 |
| 0.6601 | 7.0 | 350 | 1.3292 | 0.5599 |
| 0.6464 | 8.0 | 400 | 1.2744 | 0.5869 |
| 0.653 | 9.0 | 450 | 1.3916 | 0.5272 |
| 0.633 | 10.0 | 500 | 1.3344 | 0.5606 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Cameron/BERT-SBIC-offensive | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-cased-finetuned-fce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-fce
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5307
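The card reports only a language-modeling loss, so the sketch below assumes the checkpoint keeps a masked-language-modeling head (the repo id is a placeholder):

```python
from transformers import pipeline

# Placeholder repo id; assumes the checkpoint keeps its MLM head
fill = pipeline("fill-mask", model="<user>/bert-large-cased-finetuned-fce")
print(fill("The students [MASK] their essays before the deadline."))
```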
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9048 | 1.0 | 122 | 1.6691 |
| 1.6505 | 2.0 | 244 | 1.5172 |
| 1.5615 | 3.0 | 366 | 1.5019 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Cameron/BERT-SBIC-targetcategory | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
- quoref
- adversarial_qa
- duorc
model-index:
- name: rob-base-superqa2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 79.2365
verified: true
- name: F1
type: f1
value: 82.3326
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: test
metrics:
- name: Exact Match
type: exact_match
value: 12.4
verified: true
- name: F1
type: f1
value: 12.4
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 42.3667
verified: true
- name: F1
type: f1
value: 53.3255
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 86.1925
verified: true
- name: F1
type: f1
value: 92.4306
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rob-base-superqa2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
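Because squad_v2 includes unanswerable questions, an inference sketch should allow the null answer (the repo id is a placeholder):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the Hub path of this checkpoint
qa = pipeline("question-answering", model="<user>/rob-base-superqa2")
result = qa(
    question="Who compiled the report?",
    context="The annual report was compiled by the finance team in March.",
    handle_impossible_answer=True,  # permit squad_v2-style null answers
)
print(result)
```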
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0a0+gita4c10ee
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Cameron/BERT-eec-emotion | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ---
tags:
- generated_from_trainer
model-index:
- name: koBERT-finetuned-wholemasking20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koBERT-finetuned-wholemasking20
This model is a fine-tuned version of [monologg/kobert](https://huggingface.co/monologg/kobert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.15 | 500 | 0.4369 |
| No log | 0.29 | 1000 | 0.4280 |
| No log | 0.44 | 1500 | 0.4214 |
| No log | 0.58 | 2000 | 0.4159 |
| No log | 0.73 | 2500 | 0.4144 |
| No log | 0.87 | 3000 | 0.4106 |
| 0.474 | 1.02 | 3500 | 0.4142 |
| 0.474 | 1.16 | 4000 | 0.4106 |
| 0.474 | 1.31 | 4500 | 0.4106 |
| 0.474 | 1.45 | 5000 | 0.4101 |
| 0.474 | 1.6 | 5500 | 0.4087 |
| 0.474 | 1.75 | 6000 | 0.4070 |
| 0.474 | 1.89 | 6500 | 0.4065 |
| 0.4122 | 2.04 | 7000 | 0.4088 |
| 0.4122 | 2.18 | 7500 | 0.4073 |
| 0.4122 | 2.33 | 8000 | 0.4058 |
| 0.4122 | 2.47 | 8500 | 0.4025 |
| 0.4122 | 2.62 | 9000 | 0.4032 |
| 0.4122 | 2.76 | 9500 | 0.4062 |
| 0.4122 | 2.91 | 10000 | 0.4059 |
| 0.4081 | 3.05 | 10500 | 0.4040 |
| 0.4081 | 3.2 | 11000 | 0.3993 |
| 0.4081 | 3.35 | 11500 | 0.3982 |
| 0.4081 | 3.49 | 12000 | 0.4041 |
| 0.4081 | 3.64 | 12500 | 0.4026 |
| 0.4081 | 3.78 | 13000 | 0.4009 |
| 0.4081 | 3.93 | 13500 | 0.4011 |
| 0.4041 | 4.07 | 14000 | 0.4001 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Cameron/BERT-mdgender-convai-binary | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-padpt800
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-padpt800
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5281
- Accuracy: 0.6142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
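For reference, a hedged sketch of how the configuration above maps onto 🤗 `TrainingArguments`; the `output_dir` value is an assumption, and the comments spell out the effective-batch arithmetic:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run configuration listed above.
args = TrainingArguments(
    output_dir="wav2vec2-base-ks-padpt800",  # assumption
    learning_rate=3e-3,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=0,
    gradient_accumulation_steps=4,  # 256 * 4 = 1024 effective train batch
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
    fp16=True,  # "Native AMP"
)
```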
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.328 | 1.0 | 50 | 1.5281 | 0.6142 |
| 0.9328 | 2.0 | 100 | 1.3054 | 0.5853 |
| 0.8277 | 3.0 | 150 | 1.3858 | 0.4966 |
| 0.7689 | 4.0 | 200 | 1.4112 | 0.4975 |
| 0.7154 | 5.0 | 250 | 1.4042 | 0.5035 |
| 0.706 | 6.0 | 300 | 1.3635 | 0.5171 |
| 0.6878 | 7.0 | 350 | 1.4373 | 0.4873 |
| 0.6868 | 8.0 | 400 | 1.2890 | 0.5505 |
| 0.6705 | 9.0 | 450 | 1.3019 | 0.5405 |
| 0.6579 | 10.0 | 500 | 1.3337 | 0.5272 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Cameron/BERT-mdgender-convai-ternary | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9243589240600196
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2268
- Accuracy: 0.9245
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8587 | 1.0 | 250 | 0.3353 | 0.899 | 0.8945 |
| 0.2657 | 2.0 | 500 | 0.2268 | 0.9245 | 0.9244 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Camzure/MaamiBot-test | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-08-17T05:48:30Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b2
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.4786
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7924
- Bleu: 7.4786
- Gen Len: 44.5778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Camzure/MaamiBot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-17T05:52:41Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b10
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.1529
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b10
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8165
- Bleu: 7.1529
- Gen Len: 45.5448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Canadiancaleb/DialoGPT-small-jesse | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b20
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 6.6798
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b20
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8652
- Bleu: 6.6798
- Gen Len: 46.8789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Canadiancaleb/DialoGPT-small-walter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b50
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 5.0009
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b50
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0682
- Bleu: 5.0009
- Gen Len: 50.7284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Canadiancaleb/jessebot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-17T05:57:15Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b100
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 1.772
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b100
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5393
- Bleu: 1.772
- Gen Len: 61.0825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Canyonevo/DialoGPT-medium-KingHenry | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-17T05:57:18Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b5
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3798
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7945
- Bleu: 7.3798
- Gen Len: 44.7109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Capreolus/birch-bert-large-msmarco_mb | [
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
] | null | {
"architectures": [
"BertForNextSentencePrediction"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-08-17T06:31:07Z | ---
license: cc-by-nc-sa-3.0
---
# KhanomTan TTS v1.0
KhanomTan TTS (ขนมตาล) is an open-source Thai text-to-speech model that supports multiple languages and speakers, including Thai and English.
KhanomTan TTS is a YourTTS model trained on multilingual data, with Thai support added. We use the Thai speech corpora TSync 1* and TSync 2* together with [mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS](https://huggingface.co/datasets/mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS) to train the YourTTS model, using code from [the 🐸 Coqui-TTS](https://github.com/coqui-ai/TTS).
### Config
We add Thai characters to the graphemes config used to train the model and use the Speaker Encoder model from [🐸 Coqui-TTS](https://github.com/coqui-ai/TTS/releases/tag/speaker_encoder_model).
### Dataset
We use the Tsync 1 and Tsync 2 corpora (incomplete versions; see the note below) and add them to the [mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS](https://huggingface.co/datasets/mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS) dataset.
### Training the model
We use the 🐸 Coqui-TTS multilingual VITS recipe (version 0.7.1, commit d46fbc240ccf21797d42ac26cb27eb0b9f8d31c4) to train the model together with the speaker encoder model from [🐸 Coqui-TTS](https://github.com/coqui-ai/TTS/releases/tag/speaker_encoder_model), and we release the best checkpoint for public access.
- Model cards: [https://github.com/wannaphong/KhanomTan-TTS-v1.0](https://github.com/wannaphong/KhanomTan-TTS-v1.0)
- Dataset (Tsync 1 and Tsync 2 only): [https://huggingface.co/datasets/wannaphong/tsync1-2-yourtts](https://huggingface.co/datasets/wannaphong/tsync1-2-yourtts)
- GitHub: [https://github.com/wannaphong/KhanomTan-TTS-v1.0](https://github.com/wannaphong/KhanomTan-TTS-v1.0)
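A minimal inference sketch, assuming a recent 🐸 Coqui-TTS release with the `TTS.api` wrapper; the checkpoint/config filenames and the `speaker`/`language` ids below are assumptions, not values published with this card:
```python
# Hedged sketch: load the released YourTTS checkpoint and synthesize Thai speech.
from TTS.api import TTS

tts = TTS(model_path="best_model.pth", config_path="config.json")  # assumed filenames
tts.tts_to_file(
    text="สวัสดีครับ",            # Thai input text
    speaker="linda",              # assumed speaker id from the multi-speaker training set
    language="th-th",             # assumed language id from the training config
    file_path="khanomtan_demo.wav",
)
```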
*Note: These are not the complete corpora; only the publicly available portions were used. |
Capreolus/electra-base-msmarco | [
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"transformers"
] | text-classification | {
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 110 | null | ---
annotations_creators: []
language:
- ro
language_creators:
- machine-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: BlackKakapo/t5-small-paraphrase-ro
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text2text-generation
task_ids: []
---
# Romanian paraphrase

A fine-tuned t5-small model for Romanian paraphrasing. Since there is no public Romanian paraphrase dataset, I created my own [dataset](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro-v2). The dataset contains ~30k examples.
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("BlackKakapo/t5-small-paraphrase-ro-v2")
model = AutoModelForSeq2SeqLM.from_pretrained("BlackKakapo/t5-small-paraphrase-ro-v2")
```
### Or
```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast
model = T5ForConditionalGeneration.from_pretrained("BlackKakapo/t5-small-paraphrase-ro-v2")
tokenizer = T5TokenizerFast.from_pretrained("BlackKakapo/t5-small-paraphrase-ro-v2")
```
### Generate
```python
text = "Am impresia că fac multe greșeli."
encoding = tokenizer.encode_plus(text, padding="max_length", return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"], encoding["attention_mask"]
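# top-k/top-p (nucleus) sampling below draws five diverse candidate paraphrases instead of one greedy output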
beam_outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    do_sample=True,
    max_length=256,
    top_k=20,
    top_p=0.9,
    early_stopping=False,
    num_return_sequences=5
)
final_outputs = []
for beam_output in beam_outputs:
    text_para = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    if text.lower() != text_para.lower() and text_para not in final_outputs:
        final_outputs.append(text_para)
print(final_outputs)
```
### Output
```out
['Am impresia că fac multe erori.']
``` |
Captain-1337/CrudeBERT | [
"pytorch",
"bert",
"text-classification",
"arxiv:1908.10063",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | 2022-08-17T06:37:49Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-padpt1600
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-padpt1600
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6019
- Accuracy: 0.6111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3499 | 1.0 | 50 | 1.6019 | 0.6111 |
| 0.9698 | 2.0 | 100 | 1.4349 | 0.5613 |
| 0.866 | 3.0 | 150 | 1.4232 | 0.5547 |
| 0.8162 | 4.0 | 200 | 1.5573 | 0.4675 |
| 0.7632 | 5.0 | 250 | 1.4991 | 0.4950 |
| 0.7461 | 6.0 | 300 | 1.4251 | 0.5321 |
| 0.7374 | 7.0 | 350 | 1.6291 | 0.4247 |
| 0.7237 | 8.0 | 400 | 1.5307 | 0.4797 |
| 0.7273 | 9.0 | 450 | 1.5635 | 0.4520 |
| 0.7007 | 10.0 | 500 | 1.5841 | 0.4497 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CarlosPR/mt5-spanish-memmories-analysis | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-08-17T07:12:14Z | ---
language: en
tags:
- t5
datasets:
- squad
license: mit
---
# Question Generation Model
## Github
https://github.com/Seoneun/T5-Question-Generation
## Fine-tuning Dataset
SQuAD 1.1
| Train Data | Dev Data | Test Data |
| ------ | ------ | ------ |
| 75,722 | 10,570 | 11,877 |
## Demo
https://huggingface.co/Sehong/t5-large-QuestionGeneration
## How to use
```python
import torch
from transformers import PreTrainedTokenizerFast
from transformers import T5ForConditionalGeneration
tokenizer = PreTrainedTokenizerFast.from_pretrained('Sehong/t5-large-QuestionGeneration')
model = T5ForConditionalGeneration.from_pretrained('Sehong/t5-large-QuestionGeneration')
# tokenized
'''
text = "answer:Saint Bern ##ade ##tte So ##ubi ##rous content:Architectural ##ly , the school has a Catholic character . At ##op the Main Building ' s gold dome is a golden statue of the Virgin Mary . Immediately in front of the Main Building and facing it , is a copper statue of Christ with arms up ##rai ##sed with the legend "" V ##eni ##te Ad Me O ##m ##nes "" . Next to the Main Building is the Basilica of the Sacred Heart . Immediately behind the b ##asi ##lica is the G ##rot ##to , a Marian place of prayer and reflection . It is a replica of the g ##rot ##to at Lou ##rdes , France where the Virgin Mary reputed ##ly appeared to Saint Bern ##ade ##tte So ##ubi ##rous in 1858 . At the end of the main drive ( and in a direct line that connects through 3 statues and the Gold Dome ) , is a simple , modern stone statue of Mary ."
'''
text = "answer:Saint Bernadette Soubirous content:Architecturally , the school has a Catholic character . Atop the Main Building ' s gold dome is a golden statue of the Virgin Mary . Immediately in front of the Main Building and facing it , is a copper statue of Christ with arms upraised with the legend "" Venite Ad Me Omnes "" . Next to the Main Building is the Basilica of the Sacred Heart . Immediately behind the basilica is the Grotto , a Marian place of prayer and reflection . It is a replica of the grotto at Lourdes , France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858 . At the end of the main drive ( and in a direct line that connects through 3 statues and the Gold Dome ) , is a simple , modern stone statue of Mary ."
raw_input_ids = tokenizer.encode(text)
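# wrap the encoded text with the BOS/EOS ids this checkpoint was trained with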
input_ids = [tokenizer.bos_token_id] + raw_input_ids + [tokenizer.eos_token_id]
question_ids = model.generate(torch.tensor([input_ids]))
decode = tokenizer.decode(question_ids.squeeze().tolist(), skip_special_tokens=True)
decode = decode.replace(' # # ', '').replace('  ', ' ').replace(' ##', '')
print(decode)
```
## Evaluation
| BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE-L |
| ------ | ------ | ------ | ------ | ------ | ------- |
| 51.333 | 36.742 | 28.218 | 22.289 | 26.126 | 51.069 | |
CarlosTron/Yo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: de
widget:
- text: "[Title_nullsechsroy feat. YFG Pave_"
tags:
- Text Generation
datasets:
- genius lyrics
license: mit
---
# GPT-Rapgenerator
The Rapgenerator was trained for [nullsechsroy](https://genius.com/artists/Nullsechsroy) on top of [german-poetry-gpt2](https://huggingface.co/Anjoe/german-poetry-gpt2) for 20 epochs.
We used the [genius](https://docs.genius.com/#/songs-h2) song lyrics from the following artists:
['Ace Tee', 'Aligatoah', 'AnnenMayKantereit', 'Apache 207', 'Azad', 'Badmómzjay', 'Bausa', 'Blumentopf', 'Blumio', 'Capital Bra', 'Casper', 'Celo & Abdi', 'Cro', 'Dardan', 'Dendemann', 'Die P', 'Dondon', 'Dynamite Deluxe', 'Edgar Wasser', 'Eko Fresh', 'Farid Bang', 'Favorite', 'Genetikk', 'Haftbefehl', 'Haiyti', 'Huss und Hodn', 'Jamule', 'Jamule', 'Juju', 'Kasimir1441', 'Katja Krasavice', 'Kay One', 'Kitty Kat', 'Kool Savas', 'LX & Maxwell', 'Leila Akinyi', 'Loredana', 'Loredana & Mozzik', 'Luciano', 'Marsimoto', 'Marteria', 'Morlockk Dilemma', 'Moses Pelham', 'Nimo', 'NullSechsRoy', 'Prinz Pi', 'SSIO', 'SXTN', 'Sabrina Setlur', 'Samy Deluxe', 'Sanito', 'Sebastian Fitzek', 'Shirin David', 'Summer Cem', 'T-Low', 'Ufo361', 'YBRE', 'YFG Pave']
# Example song structure
```
[Title_nullsechsroy_Goodies]
[Part 1_nullsechsroy_Goodies]
Soulja Boy – „Pretty Boy Swag“
Heute bei ihr, aber morgen schon weg, ja
..
[Hook_nullsechsroy_Goodies]
Ich hab' Jungs in der Trap, ich hab' Jungs an der Uni (Ahh)
...
[Part 2_nullsechsroy_Goodies]
Ja, Soulja Boy – „Pretty Boy Swag“
...
[Hook_nullsechsroy_Goodies]
Ich hab' Jungs in der Trap, ich hab' Jungs an der Uni (Ahh)
...
[Post-Hook_nullsechsroy_Goodies]
Ja, ich weiß, sie findet niemals ein'n wie mich (Ahh)
...
```
# Source code to create a song
```
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# load the model from huggingface
rap_model = AutoModelForCausalLM.from_pretrained("Bachstelze/poetryRapGPT")
tokenizer = AutoTokenizer.from_pretrained("Anjoe/german-poetry-gpt2")
rap_pipe = pipeline('text-generation',
                    model=rap_model,
                    tokenizer=tokenizer,
                    pad_token_id=tokenizer.eos_token_id,
                    max_length=250)
# set the artist
song_artist = "nullsechsroy" # "nullsechsroy Deluxe"
# add a title idea or leave it blank
title = "" # "Kristall" "Fit"
# definition of the song structure
type_with_linenumbers = [("Intro", 4),
                         ("Hook", 4),
                         ("Part 1", 6),
                         ("Part 2", 6),
                         ("Outro", 4)]

def set_title(song_parts):
    """
    we create a title if it isn't set already
    and add the title to the songs parts dictionary
    """
    if len(title) > 0:
        song_parts["Title"] = "\n[Title_" + song_artist + "_" + title + "]\n"
        song_parts["artist_with_title"] = song_artist + "_" + title
    else:
        title_input = "\n[Title_" + song_artist + "_"
        title_lines = rap_pipe(title_input)[0]['generated_text']
        index_title_end = title_lines.index("]\n")
        artist_with_title = title_lines[8:index_title_end]
        song_parts["Title"] = title_lines[:index_title_end+1]
        song_parts["artist_with_title"] = artist_with_title

def create_song_by_parts():
    """
    we iterate over the song structure
    and return the dictionary with the song parts
    """
    song_parts = {}
    set_title(song_parts)
    for (part_type, line_number) in type_with_linenumbers:
        new_song_part = create_song_part(part_type, song_parts["artist_with_title"], line_number)
        song_parts[part_type] = new_song_part
    return song_parts

def get_line(pipe_input, line_number):
    """
    We generate a new song line.
    This function could be scaled to more lines.
    """
    new_lines = rap_pipe(pipe_input)[0]['generated_text'].split("\n")
    if len(new_lines) > line_number + 3:
        new_line = new_lines[line_number+3] + "\n"
        return new_line
    else:  # retry
        return get_line(pipe_input, line_number)

def create_song_part(part_type, artist_with_title, lines_number):
    """
    we generate one song part
    """
    start_type = "\n[" + part_type + "_" + artist_with_title + "]\n"
    song_part = start_type  # + preset start line
    lines = [""]
    for line_number in range(lines_number):
        pipe_input = start_type + lines[-1]
        new_line = get_line(pipe_input, line_number)
        lines.append(new_line)
        song_part += new_line
    return song_part

def print_song(song_parts):
    """
    Let's print the generated song
    """
    print(song_parts["Title"])
    print(song_parts["Intro"])
    print(song_parts["Part 1"])
    print(song_parts["Hook"])
    print(song_parts["Part 2"])
    print(song_parts["Hook"])
    print(song_parts["Outro"])

# start the generation of one song
song_parts = create_song_by_parts()
print_song(song_parts)
``` |
CasualHomie/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2022-08-17T07:29:21Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-padpt3200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-padpt3200
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2818
- Accuracy: 0.6200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3802 | 1.0 | 50 | 1.5035 | 0.6121 |
| 1.0153 | 2.0 | 100 | 1.2818 | 0.6200 |
| 0.9105 | 3.0 | 150 | 1.3827 | 0.5380 |
| 0.8535 | 4.0 | 200 | 1.3513 | 0.5587 |
| 0.7982 | 5.0 | 250 | 1.4749 | 0.5068 |
| 0.7754 | 6.0 | 300 | 1.5109 | 0.5025 |
| 0.749 | 7.0 | 350 | 1.6198 | 0.4476 |
| 0.7497 | 8.0 | 400 | 1.5480 | 0.4850 |
| 0.7386 | 9.0 | 450 | 1.6052 | 0.4665 |
| 0.7185 | 10.0 | 500 | 1.6085 | 0.4734 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Cathy/reranking_model | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- embedding-data/sentence-compression
---
# edumunozsala/distilroberta-sentence-transformer-test
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('edumunozsala/distilroberta-sentence-transformer-test')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('edumunozsala/distilroberta-sentence-transformer-test')
model = AutoModel.from_pretrained('edumunozsala/distilroberta-sentence-transformer-test')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
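Either route yields one 768-dimensional vector per sentence. A quick, hedged sketch of scoring semantic similarity with them (`util.cos_sim` ships with sentence-transformers):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('edumunozsala/distilroberta-sentence-transformer-test')
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)
# Pairwise cosine-similarity matrix; higher values mean closer meaning.
print(util.cos_sim(embeddings, embeddings))
```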
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=edumunozsala/distilroberta-sentence-transformer-test)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1125 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 337,
"weight_decay": 0.01
}
```
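Put together, a hedged reconstruction of that training run with the sentence-transformers API; the base checkpoint name and the example pair are assumptions:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("distilroberta-base")  # assumed base checkpoint
train_examples = [
    # One positive pair per row, e.g. from embedding-data/sentence-compression.
    InputExample(texts=["a long source sentence ...", "its compressed form ..."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
# MultipleNegativesRankingLoss treats the other pairs in a batch as in-batch negatives.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=337,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```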
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Cedille/fr-boris | [
"pytorch",
"gptj",
"text-generation",
"fr",
"dataset:c4",
"arxiv:2202.03371",
"transformers",
"causal-lm",
"license:mit",
"has_space"
] | text-generation | {
"architectures": [
"GPTJForCausalLM"
],
"model_type": "gptj",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 401 | 2022-08-17T07:43:29Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b0.03
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.4044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b0.03
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8327
- Bleu: 7.4044
- Gen Len: 44.8759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-base-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 34 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b0.04
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.5994
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b0.04
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8124
- Bleu: 7.5994
- Gen Len: 44.6753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-base-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b0.75
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.4601
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b0.75
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8013
- Bleu: 7.4601
- Gen Len: 44.2356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-base-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | 2022-08-17T07:48:27Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b1.25
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.5563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b1.25
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7945
- Bleu: 7.5563
- Gen Len: 44.1141
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-base-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-08-17T07:48:45Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-b1.5
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.5422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b1.5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7938
- Bleu: 7.5422
- Gen Len: 44.3267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/albert-large-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-08-17T08:27:19Z | jeremy sits and reads an imaginary book even though jeremy is actually the imaginary friend of a horse ghost |
dccuchile/albert-large-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ks-ept4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-ept4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5663
- Accuracy: 0.6209
## Model description
More information needed
## Intended uses & limitations
More information needed
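Until the card is filled in, a minimal keyword-spotting sketch (illustrative only; the Hub id below is a placeholder, and the label set comes from the Superb `ks` task):
```python
from transformers import pipeline

# Placeholder id — substitute the actual Hub path of this checkpoint.
classifier = pipeline("audio-classification", model="<this-checkpoint>")
# wav2vec2-base expects 16 kHz mono audio.
preds = classifier("command.wav")
print(preds[0])  # e.g. {"label": "yes", "score": 0.62}
```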
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5133 | 1.0 | 50 | 1.5663 | 0.6209 |
| 1.4819 | 2.0 | 100 | 1.5675 | 0.6169 |
| 1.4082 | 3.0 | 150 | 1.5372 | 0.5802 |
| 1.3536 | 4.0 | 200 | 1.6716 | 0.5338 |
| 1.296 | 5.0 | 250 | 1.7601 | 0.5399 |
| 1.3053 | 6.0 | 300 | 1.6778 | 0.5630 |
| 1.2734 | 7.0 | 350 | 1.6554 | 0.5734 |
| 1.2837 | 8.0 | 400 | 1.7338 | 0.5741 |
| 1.2682 | 9.0 | 450 | 1.7313 | 0.5774 |
| 1.2776 | 10.0 | 500 | 1.7083 | 0.5791 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dccuchile/albert-tiny-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: nils-nl-to-rx-pt-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nils-nl-to-rx-pt-v3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8061 | 1.0 | 500 | 0.5023 |
| 0.6521 | 2.0 | 1000 | 0.3094 |
| 0.5033 | 3.0 | 1500 | 0.2751 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dccuchile/albert-xxlarge-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | Access to model Thantiwa/Thaitune_eiei is restricted and you are not in the authorized list. Visit https://huggingface.co/Thantiwa/Thaitune_eiei to ask for access. |
dccuchile/bert-base-spanish-wwm-cased-finetuned-qa-mlqa | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/apesahoy-discoelysiumbot-jzux/1660737778768/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1384356575410675713/xQvAaofk_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1304589362051645441/Yo_o5yi5_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Humongous Ape MP & disco elysium quotes & trash jones</div>
<div style="text-align: center; font-size: 14px;">@apesahoy-discoelysiumbot-jzux</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Humongous Ape MP & disco elysium quotes & trash jones.
| Data | Humongous Ape MP | disco elysium quotes | trash jones |
| --- | --- | --- | --- |
| Tweets downloaded | 3246 | 3250 | 3233 |
| Retweets | 198 | 0 | 615 |
| Short tweets | 610 | 20 | 280 |
| Tweets kept | 2438 | 3230 | 2338 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28ibo0tz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @apesahoy-discoelysiumbot-jzux's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kccyxxh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kccyxxh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/apesahoy-discoelysiumbot-jzux')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dccuchile/distilbert-base-spanish-uncased-finetuned-mldoc | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | 2022-08-17T12:22:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-small-finetuned-wnut17-ner-longer10
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: train
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5546995377503852
- name: Recall
type: recall
value: 0.430622009569378
- name: F1
type: f1
value: 0.48484848484848486
- name: Accuracy
type: accuracy
value: 0.9250487441220323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-wnut17-ner-longer10
This model is a fine-tuned version of [muhtasham/bert-small-finetuned-wnut17-ner-longer6](https://huggingface.co/muhtasham/bert-small-finetuned-wnut17-ner-longer6) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4693
- Precision: 0.5547
- Recall: 0.4306
- F1: 0.4848
- Accuracy: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
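Pending a fuller card, a hedged usage sketch (the Hub id is inferred from the card title and model lineage, so treat it as an assumption):
```python
from transformers import pipeline

# Id inferred from the card title — verify it on the Hub before use.
ner = pipeline(
    "token-classification",
    model="muhtasham/bert-small-finetuned-wnut17-ner-longer10",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Taylor Swift played Madison Square Garden last night."))
```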
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 425 | 0.4815 | 0.5759 | 0.3947 | 0.4684 | 0.9255 |
| 0.0402 | 2.0 | 850 | 0.4467 | 0.5397 | 0.4390 | 0.4842 | 0.9247 |
| 0.0324 | 3.0 | 1275 | 0.4646 | 0.5332 | 0.4318 | 0.4772 | 0.9244 |
| 0.0315 | 4.0 | 1700 | 0.4693 | 0.5547 | 0.4306 | 0.4848 | 0.9250 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Chae/botman | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-08-17T13:17:39Z | ---
language:
- en
---
# Maverick <br>
Developed during my internship at [**Vela Partners**](https://vela.partners/) as a Machine Learning Engineer. <br>
The paper presenting Maverick can be found on my [GitHub](https://github.com/lukasec/Maverick). <br>
Maverick consists of two sub-models, published here on Hugging Face: [MAV-Moneyball](https://huggingface.co/lukasec/Maverick-Moneyball) & [MAV-Midas](https://huggingface.co/lukasec/Maverick-Midas).
**Abstract** <br>
Maverick (MAV) is an AI-enabled algorithm to guide Venture Capital investment by leveraging BERT - the state-of-the-art deep learning model for NLP. Its ultimate goal is to predict the success of early-stage start-ups.
In Venture Capital (VC) there are two types of successful start-ups: those that replace existing incumbents (type 1), and those that create new markets (type 2). In order to predict the success of a start-up with respect to both types, Maverick consists of two models:
* [**MAV-Moneyball:**](https://huggingface.co/lukasec/Maverick-Moneyball) predicts the success of early-stage start-ups of type 1.
* [**MAV-Midas:**](https://huggingface.co/lukasec/Maverick-Midas) predicts whether a start-up fits the current investment trends of the most successful brand and long-tail investors, thereby accounting for new emerging markets that do not yet have established successful start-ups leading them, i.e. start-ups of type 2.<br><br>
Maverick is developed through a transfer-learning approach, by fine-tuning a pre-trained BERT model for type 1 and type 2 classification. Notably, both MAV-Moneyball and MAV-Midas achieve a true-positive ratio greater than 70%, which in the context of VC investment is one of the most important evaluation criteria: the percentage of successful companies that Maverick predicts to be successful.
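A hedged usage sketch for either sub-model (assuming they load as standard BERT sequence-classification checkpoints; the index of the "success" class is an assumption, not confirmed by the paper):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "lukasec/Maverick-Moneyball"  # or "lukasec/Maverick-Midas"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

description = "Early-stage startup building developer tools for data pipelines."
inputs = tokenizer(description, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# Index 1 is assumed to be the "success" class (illustrative).
print(f"Predicted success probability: {logits.softmax(-1)[0, 1].item():.2f}")
```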
|
Cheatham/xlm-roberta-large-finetuned-r01 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 23 | 2022-08-17T15:10:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3619
- Precision: 0.7737
- Recall: 0.7568
- F1: 0.7651
- Accuracy: 0.8876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3965 | 1.0 | 6529 | 0.3917 | 0.7565 | 0.7324 | 0.7442 | 0.8791 |
| 0.361 | 2.0 | 13058 | 0.3706 | 0.7765 | 0.7453 | 0.7606 | 0.8859 |
| 0.3397 | 3.0 | 19587 | 0.3619 | 0.7737 | 0.7568 | 0.7651 | 0.8876 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Ci/Pai | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-17T20:01:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: train_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_model
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0825
- Wer: 0.9077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6984 | 11.11 | 500 | 3.1332 | 1.0 |
| 2.4775 | 22.22 | 1000 | 1.0825 | 0.9077 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Cinnamon/electra-small-japanese-generator | [
"pytorch",
"electra",
"fill-mask",
"ja",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19 | null | ---
license: apache-2.0
language: code
datasets:
- codeparrot/codecomplex
---
This is a fine-tuned version of [UniXcoder](https://huggingface.co/microsoft/unixcoder-base-nine), a unified cross-modal pre-trained model for programming languages, on [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex), a dataset for complexity prediction of Java code. The fine-tuning code is available in this [repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/codeparrot/examples); a usage sketch follows below. |
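A hedged usage sketch for the fine-tune described above (assuming it exposes a standard sequence-classification head over the CodeComplex complexity classes; the Hub id is a placeholder):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder id — substitute the actual Hub path of this fine-tune.
model_id = "<this-checkpoint>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

java_code = "for (int i = 0; i < n; i++) { sum += i; }"
inputs = tokenizer(java_code, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])  # e.g. "linear"
```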
CoShin/XLM-roberta-large_ko_en_nil_sts | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-18T00:55:49Z | ---
tags:
- conversational
---
# Spike Spiegel DialoGPT Model |
Craig/mGqFiPhu | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-20T17:11:20Z | ---
tags:
- generated_from_trainer
model-index:
- name: chinese-pert-large-finetuned-med-zh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-pert-large-finetuned-med-zh
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9428 | 1.0 | 14081 | 10.3783 |
| 0.8847 | 2.0 | 28162 | 6.8072 |
| 0.8689 | 3.0 | 42243 | 1.3781 |
| 0.8592 | 4.0 | 56324 | 5.5274 |
| 0.8734 | 5.0 | 70405 | 1.4370 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.8.0+cu111
- Datasets 2.4.0
- Tokenizers 0.10.3
|
DTAI-KULeuven/mbert-corona-tweets-belgium-topics | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"transformers",
"Dutch",
"French",
"English",
"Tweets",
"Topic classification"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 167 | null | This is random-wav2vec2-base, an unpretrained version of wav2vec 2.0. The weight of this model is randomly initialized, and can be used for establishing randomized baselines or training a model from scratch. The code used to do so is adapted from: https://huggingface.co/saibo/random-roberta-base. |
DaWang/demo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-3 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with D🧨iffusers blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-3** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned for 195,000 steps at resolution `512x512` on "laion-improved-aesthetics", with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
For more information, please refer to [Training](#training).
These weights are intended to be used with the D🧨iffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
    @InProceedings{Rombach_2022_CVPR,
        author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
        title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2022},
        pages     = {10684-10695}
    }
## Examples
We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.
```bash
pip install --upgrade diffusers transformers scipy
```
Running the pipeline with the default PNDM scheduler:
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-3"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("astronaut_rides_horse.png")
```
**Note**:
If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision:
```py
import torch
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("astronaut_rides_horse.png")
```
To swap out the noise scheduler, pass it to `from_pretrained`:
```python
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
model_id = "CompVis/stable-diffusion-v1-3"
# Use the K-LMS scheduler here instead
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("astronaut_rides_horse.png")
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
## Training
### Training Data
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
### Training Procedure
Stable Diffusion v1-3 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
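In code, one training step of this objective looks roughly as follows. This is an illustrative PyTorch sketch, not the actual training script: the `vae`, `text_encoder`, `unet`, and `noise_scheduler` objects are assumed to follow Diffusers-style interfaces.
```python
import torch
import torch.nn.functional as F

def ldm_training_step(batch, vae, text_encoder, unet, noise_scheduler):
    # Encode images into the latent space (downsampling factor f = 8).
    latents = vae.encode(batch["pixel_values"]).latent_dist.sample()
    # Encode prompts with the frozen ViT-L/14 text encoder.
    encoder_hidden_states = text_encoder(batch["input_ids"])[0]
    # Sample Gaussian noise and a random timestep per example.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device,
    )
    # Diffuse the latents forward according to the noise schedule.
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    # The UNet predicts the added noise, conditioned on the text via cross-attention.
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    # Reconstruction objective between added and predicted noise.
    return F.mse_loss(noise_pred, noise)
```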
We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [**`stable-diffusion-v1-4`**](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
### Training details
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* |
Daiki/scibert_scivocab_uncased-finetuned-cola | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuned-marktextepoch-n800
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-marktextepoch-n800
This model is a fine-tuned version of [leokai/finetuned-marktextepoch-n600](https://huggingface.co/leokai/finetuned-marktextepoch-n600) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.287 | 1.0 | 1606 | 2.8473 |
| 0.2913 | 2.0 | 3212 | 2.8147 |
| 0.2865 | 3.0 | 4818 | 2.8809 |
| 0.2947 | 4.0 | 6424 | 2.8510 |
| 0.2988 | 5.0 | 8030 | 2.8883 |
| 0.3109 | 6.0 | 9636 | 2.9016 |
| 0.309 | 7.0 | 11242 | 2.8869 |
| 0.301 | 8.0 | 12848 | 2.9201 |
| 0.303 | 9.0 | 14454 | 2.8902 |
| 0.3156 | 10.0 | 16060 | 2.8888 |
| 0.3132 | 11.0 | 17666 | 2.8777 |
| 0.3089 | 12.0 | 19272 | 2.9429 |
| 0.3146 | 13.0 | 20878 | 2.9131 |
| 0.3297 | 14.0 | 22484 | 2.8983 |
| 0.3214 | 15.0 | 24090 | 2.9321 |
| 0.3095 | 16.0 | 25696 | 2.9436 |
| 0.3171 | 17.0 | 27302 | 2.9163 |
| 0.308 | 18.0 | 28908 | 2.9545 |
| 0.3174 | 19.0 | 30514 | 2.9161 |
| 0.3163 | 20.0 | 32120 | 2.9081 |
| 0.3191 | 21.0 | 33726 | 2.9465 |
| 0.3254 | 22.0 | 35332 | 2.9404 |
| 0.3168 | 23.0 | 36938 | 2.9054 |
| 0.33 | 24.0 | 38544 | 2.9274 |
| 0.3115 | 25.0 | 40150 | 2.9277 |
| 0.3125 | 26.0 | 41756 | 2.9627 |
| 0.3246 | 27.0 | 43362 | 2.9583 |
| 0.3133 | 28.0 | 44968 | 2.9433 |
| 0.3221 | 29.0 | 46574 | 2.9747 |
| 0.3185 | 30.0 | 48180 | 2.9793 |
| 0.3123 | 31.0 | 49786 | 2.9170 |
| 0.3169 | 32.0 | 51392 | 2.9711 |
| 0.3175 | 33.0 | 52998 | 2.9457 |
| 0.3253 | 34.0 | 54604 | 2.9518 |
| 0.3163 | 35.0 | 56210 | 2.9218 |
| 0.3113 | 36.0 | 57816 | 2.9524 |
| 0.3208 | 37.0 | 59422 | 2.9570 |
| 0.3197 | 38.0 | 61028 | 2.9439 |
| 0.3213 | 39.0 | 62634 | 2.9416 |
| 0.3259 | 40.0 | 64240 | 2.9884 |
| 0.3216 | 41.0 | 65846 | 2.9641 |
| 0.3154 | 42.0 | 67452 | 2.9797 |
| 0.3258 | 43.0 | 69058 | 2.9813 |
| 0.3236 | 44.0 | 70664 | 2.9700 |
| 0.3134 | 45.0 | 72270 | 2.9881 |
| 0.3219 | 46.0 | 73876 | 2.9982 |
| 0.3243 | 47.0 | 75482 | 2.9702 |
| 0.3246 | 48.0 | 77088 | 2.9706 |
| 0.3245 | 49.0 | 78694 | 2.9965 |
| 0.3124 | 50.0 | 80300 | 2.9893 |
| 0.3172 | 51.0 | 81906 | 2.9859 |
| 0.3118 | 52.0 | 83512 | 2.9707 |
| 0.3187 | 53.0 | 85118 | 2.9771 |
| 0.3256 | 54.0 | 86724 | 2.9827 |
| 0.3222 | 55.0 | 88330 | 2.9776 |
| 0.3212 | 56.0 | 89936 | 2.9607 |
| 0.3215 | 57.0 | 91542 | 2.9664 |
| 0.3266 | 58.0 | 93148 | 2.9638 |
| 0.3209 | 59.0 | 94754 | 2.9842 |
| 0.333 | 60.0 | 96360 | 3.0053 |
| 0.3202 | 61.0 | 97966 | 2.9833 |
| 0.3155 | 62.0 | 99572 | 2.9952 |
| 0.32 | 63.0 | 101178 | 2.9737 |
| 0.3291 | 64.0 | 102784 | 2.9804 |
| 0.3259 | 65.0 | 104390 | 2.9767 |
| 0.32 | 66.0 | 105996 | 2.9610 |
| 0.3208 | 67.0 | 107602 | 3.0111 |
| 0.3277 | 68.0 | 109208 | 2.9588 |
| 0.337 | 69.0 | 110814 | 2.9920 |
| 0.3296 | 70.0 | 112420 | 2.9466 |
| 0.3197 | 71.0 | 114026 | 2.9619 |
| 0.323 | 72.0 | 115632 | 2.9733 |
| 0.3247 | 73.0 | 117238 | 2.9787 |
| 0.3246 | 74.0 | 118844 | 2.9383 |
| 0.3203 | 75.0 | 120450 | 3.0123 |
| 0.3272 | 76.0 | 122056 | 3.0284 |
| 0.3407 | 77.0 | 123662 | 3.0047 |
| 0.3312 | 78.0 | 125268 | 2.9465 |
| 0.3262 | 79.0 | 126874 | 2.9805 |
| 0.3221 | 80.0 | 128480 | 2.9713 |
| 0.3246 | 81.0 | 130086 | 2.9869 |
| 0.3208 | 82.0 | 131692 | 2.9970 |
| 0.3196 | 83.0 | 133298 | 2.9864 |
| 0.3311 | 84.0 | 134904 | 3.0080 |
| 0.3235 | 85.0 | 136510 | 2.9739 |
| 0.3251 | 86.0 | 138116 | 2.9749 |
| 0.3248 | 87.0 | 139722 | 2.9588 |
| 0.3342 | 88.0 | 141328 | 2.9509 |
| 0.3456 | 89.0 | 142934 | 2.9713 |
| 0.3337 | 90.0 | 144540 | 2.9968 |
| 0.323 | 91.0 | 146146 | 2.9790 |
| 0.3202 | 92.0 | 147752 | 2.9919 |
| 0.3308 | 93.0 | 149358 | 3.0100 |
| 0.3232 | 94.0 | 150964 | 2.9873 |
| 0.3356 | 95.0 | 152570 | 2.9786 |
| 0.3282 | 96.0 | 154176 | 2.9965 |
| 0.3404 | 97.0 | 155782 | 3.0198 |
| 0.3212 | 98.0 | 157388 | 2.9713 |
| 0.3307 | 99.0 | 158994 | 2.9979 |
| 0.337 | 100.0 | 160600 | 2.9805 |
| 0.3354 | 101.0 | 162206 | 2.9759 |
| 0.3252 | 102.0 | 163812 | 2.9810 |
| 0.3324 | 103.0 | 165418 | 2.9433 |
| 0.3278 | 104.0 | 167024 | 3.0079 |
| 0.3419 | 105.0 | 168630 | 2.9576 |
| 0.343 | 106.0 | 170236 | 2.9610 |
| 0.3294 | 107.0 | 171842 | 2.9147 |
| 0.3271 | 108.0 | 173448 | 2.9740 |
| 0.3315 | 109.0 | 175054 | 2.9736 |
| 0.3413 | 110.0 | 176660 | 2.9819 |
| 0.3344 | 111.0 | 178266 | 2.9783 |
| 0.3399 | 112.0 | 179872 | 2.9836 |
| 0.3314 | 113.0 | 181478 | 2.9605 |
| 0.3344 | 114.0 | 183084 | 2.9629 |
| 0.3346 | 115.0 | 184690 | 2.9535 |
| 0.3324 | 116.0 | 186296 | 2.9139 |
| 0.3493 | 117.0 | 187902 | 2.9383 |
| 0.341 | 118.0 | 189508 | 2.9547 |
| 0.3414 | 119.0 | 191114 | 2.9592 |
| 0.335 | 120.0 | 192720 | 2.9822 |
| 0.3423 | 121.0 | 194326 | 2.9498 |
| 0.3415 | 122.0 | 195932 | 2.9371 |
| 0.3557 | 123.0 | 197538 | 2.9625 |
| 0.3544 | 124.0 | 199144 | 2.9637 |
| 0.3528 | 125.0 | 200750 | 2.9881 |
| 0.3567 | 126.0 | 202356 | 2.9576 |
| 0.3336 | 127.0 | 203962 | 2.9427 |
| 0.3282 | 128.0 | 205568 | 2.9659 |
| 0.3605 | 129.0 | 207174 | 2.9555 |
| 0.3436 | 130.0 | 208780 | 2.9590 |
| 0.3489 | 131.0 | 210386 | 2.9250 |
| 0.3604 | 132.0 | 211992 | 2.9411 |
| 0.347 | 133.0 | 213598 | 2.9093 |
| 0.3623 | 134.0 | 215204 | 2.9324 |
| 0.3449 | 135.0 | 216810 | 2.9564 |
| 0.3459 | 136.0 | 218416 | 2.9254 |
| 0.3519 | 137.0 | 220022 | 2.9512 |
| 0.3499 | 138.0 | 221628 | 2.9411 |
| 0.3588 | 139.0 | 223234 | 2.8994 |
| 0.3657 | 140.0 | 224840 | 2.9372 |
| 0.3564 | 141.0 | 226446 | 2.9237 |
| 0.3445 | 142.0 | 228052 | 2.9380 |
| 0.359 | 143.0 | 229658 | 2.9547 |
| 0.3495 | 144.0 | 231264 | 2.9238 |
| 0.3545 | 145.0 | 232870 | 2.9436 |
| 0.3523 | 146.0 | 234476 | 2.9390 |
| 0.3785 | 147.0 | 236082 | 2.8861 |
| 0.356 | 148.0 | 237688 | 2.9239 |
| 0.3624 | 149.0 | 239294 | 2.8960 |
| 0.3619 | 150.0 | 240900 | 2.9224 |
| 0.3607 | 151.0 | 242506 | 2.9155 |
| 0.3585 | 152.0 | 244112 | 2.9144 |
| 0.3735 | 153.0 | 245718 | 2.8805 |
| 0.3534 | 154.0 | 247324 | 2.9095 |
| 0.3667 | 155.0 | 248930 | 2.8888 |
| 0.3705 | 156.0 | 250536 | 2.9049 |
| 0.3711 | 157.0 | 252142 | 2.8801 |
| 0.3633 | 158.0 | 253748 | 2.8874 |
| 0.36 | 159.0 | 255354 | 2.8984 |
| 0.3752 | 160.0 | 256960 | 2.9004 |
| 0.3717 | 161.0 | 258566 | 2.8577 |
| 0.3742 | 162.0 | 260172 | 2.8772 |
| 0.3815 | 163.0 | 261778 | 2.9183 |
| 0.3695 | 164.0 | 263384 | 2.9144 |
| 0.3809 | 165.0 | 264990 | 2.8968 |
| 0.3813 | 166.0 | 266596 | 2.8690 |
| 0.3803 | 167.0 | 268202 | 2.8748 |
| 0.3813 | 168.0 | 269808 | 2.8676 |
| 0.3782 | 169.0 | 271414 | 2.8473 |
| 0.3848 | 170.0 | 273020 | 2.8816 |
| 0.371 | 171.0 | 274626 | 2.8929 |
| 0.3843 | 172.0 | 276232 | 2.8858 |
| 0.381 | 173.0 | 277838 | 2.8590 |
| 0.3889 | 174.0 | 279444 | 2.8484 |
| 0.3814 | 175.0 | 281050 | 2.8634 |
| 0.3865 | 176.0 | 282656 | 2.8713 |
| 0.3968 | 177.0 | 284262 | 2.8490 |
| 0.4007 | 178.0 | 285868 | 2.8497 |
| 0.3805 | 179.0 | 287474 | 2.8435 |
| 0.3903 | 180.0 | 289080 | 2.8582 |
| 0.392 | 181.0 | 290686 | 2.8473 |
| 0.3926 | 182.0 | 292292 | 2.8584 |
| 0.3921 | 183.0 | 293898 | 2.8850 |
| 0.3958 | 184.0 | 295504 | 2.8532 |
| 0.3858 | 185.0 | 297110 | 2.8568 |
| 0.4002 | 186.0 | 298716 | 2.7939 |
| 0.3999 | 187.0 | 300322 | 2.8548 |
| 0.3932 | 188.0 | 301928 | 2.8598 |
| 0.4005 | 189.0 | 303534 | 2.8390 |
| 0.4048 | 190.0 | 305140 | 2.8336 |
| 0.3983 | 191.0 | 306746 | 2.8286 |
| 0.394 | 192.0 | 308352 | 2.8437 |
| 0.3989 | 193.0 | 309958 | 2.8594 |
| 0.3966 | 194.0 | 311564 | 2.8541 |
| 0.397 | 195.0 | 313170 | 2.8697 |
| 0.4007 | 196.0 | 314776 | 2.8549 |
| 0.3978 | 197.0 | 316382 | 2.8815 |
| 0.4005 | 198.0 | 317988 | 2.8565 |
| 0.4025 | 199.0 | 319594 | 2.8451 |
| 0.4078 | 200.0 | 321200 | 2.8433 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Davlan/xlm-roberta-base-finetuned-yoruba | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-korean-demo-test2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-korean-demo-test2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0566
- Wer: 0.5224
## Model description
More information needed
## Intended uses & limitations
More information needed
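Until the card is filled in, a hedged transcription sketch (the Hub id is a placeholder; audio is assumed to be 16 kHz mono, as the XLSR checkpoints expect):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder id — substitute the actual Hub path of this checkpoint.
processor = Wav2Vec2Processor.from_pretrained("<this-checkpoint>")
model = Wav2Vec2ForCTC.from_pretrained("<this-checkpoint>")

speech, _ = librosa.load("korean_sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])  # greedy CTC decode
```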
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 31.2541 | 0.3 | 400 | 5.4002 | 1.0 |
| 4.9419 | 0.59 | 800 | 5.3336 | 1.0 |
| 4.8926 | 0.89 | 1200 | 5.0531 | 1.0 |
| 4.7218 | 1.19 | 1600 | 4.5172 | 1.0 |
| 4.0218 | 1.49 | 2000 | 3.1418 | 0.9518 |
| 3.0654 | 1.78 | 2400 | 2.4376 | 0.9041 |
| 2.6226 | 2.08 | 2800 | 2.0151 | 0.8643 |
| 2.2944 | 2.38 | 3200 | 1.8025 | 0.8290 |
| 2.1872 | 2.67 | 3600 | 1.6469 | 0.7962 |
| 2.0747 | 2.97 | 4000 | 1.5165 | 0.7714 |
| 1.8479 | 3.27 | 4400 | 1.4281 | 0.7694 |
| 1.8288 | 3.57 | 4800 | 1.3791 | 0.7326 |
| 1.801 | 3.86 | 5200 | 1.3328 | 0.7177 |
| 1.6723 | 4.16 | 5600 | 1.2954 | 0.7192 |
| 1.5925 | 4.46 | 6000 | 1.3137 | 0.6953 |
| 1.5709 | 4.75 | 6400 | 1.2086 | 0.6973 |
| 1.5294 | 5.05 | 6800 | 1.1811 | 0.6730 |
| 1.3844 | 5.35 | 7200 | 1.2053 | 0.6769 |
| 1.3906 | 5.65 | 7600 | 1.1287 | 0.6556 |
| 1.4088 | 5.94 | 8000 | 1.1251 | 0.6466 |
| 1.2989 | 6.24 | 8400 | 1.1577 | 0.6546 |
| 1.2523 | 6.54 | 8800 | 1.0643 | 0.6377 |
| 1.2651 | 6.84 | 9200 | 1.0865 | 0.6417 |
| 1.2209 | 7.13 | 9600 | 1.0981 | 0.6272 |
| 1.1435 | 7.43 | 10000 | 1.1195 | 0.6317 |
| 1.1616 | 7.73 | 10400 | 1.0672 | 0.6327 |
| 1.1272 | 8.02 | 10800 | 1.0413 | 0.6248 |
| 1.043 | 8.32 | 11200 | 1.0555 | 0.6233 |
| 1.0523 | 8.62 | 11600 | 1.0372 | 0.6178 |
| 1.0208 | 8.92 | 12000 | 1.0170 | 0.6128 |
| 0.9895 | 9.21 | 12400 | 1.0354 | 0.5934 |
| 0.95 | 9.51 | 12800 | 1.1019 | 0.6039 |
| 0.9705 | 9.81 | 13200 | 1.0229 | 0.5855 |
| 0.9202 | 10.1 | 13600 | 1.0364 | 0.5919 |
| 0.8644 | 10.4 | 14000 | 1.0721 | 0.5984 |
| 0.8641 | 10.7 | 14400 | 1.0383 | 0.5905 |
| 0.8924 | 11.0 | 14800 | 0.9947 | 0.5760 |
| 0.7914 | 11.29 | 15200 | 1.0270 | 0.5885 |
| 0.7882 | 11.59 | 15600 | 1.0271 | 0.5741 |
| 0.8116 | 11.89 | 16000 | 0.9937 | 0.5741 |
| 0.7584 | 12.18 | 16400 | 0.9924 | 0.5626 |
| 0.7051 | 12.48 | 16800 | 1.0023 | 0.5572 |
| 0.7232 | 12.78 | 17200 | 1.0479 | 0.5512 |
| 0.7149 | 13.08 | 17600 | 1.0475 | 0.5765 |
| 0.6579 | 13.37 | 18000 | 1.0218 | 0.5552 |
| 0.6615 | 13.67 | 18400 | 1.0339 | 0.5631 |
| 0.6629 | 13.97 | 18800 | 1.0239 | 0.5621 |
| 0.6221 | 14.26 | 19200 | 1.0331 | 0.5537 |
| 0.6159 | 14.56 | 19600 | 1.0640 | 0.5532 |
| 0.6032 | 14.86 | 20000 | 1.0192 | 0.5567 |
| 0.5748 | 15.16 | 20400 | 1.0093 | 0.5507 |
| 0.5614 | 15.45 | 20800 | 1.0458 | 0.5472 |
| 0.5626 | 15.75 | 21200 | 1.0318 | 0.5398 |
| 0.5429 | 16.05 | 21600 | 1.0112 | 0.5278 |
| 0.5407 | 16.34 | 22000 | 1.0120 | 0.5278 |
| 0.511 | 16.64 | 22400 | 1.0335 | 0.5249 |
| 0.5316 | 16.94 | 22800 | 1.0146 | 0.5348 |
| 0.4949 | 17.24 | 23200 | 1.0287 | 0.5388 |
| 0.496 | 17.53 | 23600 | 1.0229 | 0.5348 |
| 0.4986 | 17.83 | 24000 | 1.0094 | 0.5313 |
| 0.4787 | 18.13 | 24400 | 1.0620 | 0.5234 |
| 0.4508 | 18.42 | 24800 | 1.0401 | 0.5323 |
| 0.4754 | 18.72 | 25200 | 1.0543 | 0.5303 |
| 0.4584 | 19.02 | 25600 | 1.0433 | 0.5194 |
| 0.4431 | 19.32 | 26000 | 1.0597 | 0.5249 |
| 0.4448 | 19.61 | 26400 | 1.0548 | 0.5229 |
| 0.4475 | 19.91 | 26800 | 1.0566 | 0.5224 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Declan/ChicagoTribune_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- conversational
---
# Rick & Morty DialoGPT Model
Declan/ChicagoTribune_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/jeffreykofman/1660909090300/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1173374836/JKinLA_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jeff</div>
<div style="text-align: center; font-size: 14px;">@jeffreykofman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jeff.
| Data | Jeff |
| --- | --- |
| Tweets downloaded | 931 |
| Retweets | 55 |
| Short tweets | 27 |
| Tweets kept | 849 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26juxf9u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jeffreykofman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ev5sj6q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ev5sj6q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jeffreykofman')
generator("My dream is", num_return_sequences=5)
```
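For reproducible samples, you can seed generation first; a small sketch using the `set_seed` helper from transformers:

```python
from transformers import set_seed

set_seed(42)  # fix the RNGs used by the pipeline so sampling is repeatable
```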
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Declan/NPR_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- sasha/autotrain-data-BERTBase-TweetEval
co2_eq_emissions:
emissions: 0.04868905658915141
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1281249000
- CO2 Emissions (in grams): 0.0487
## Validation Metrics
- Loss: 0.602
- Accuracy: 0.743
- Macro F1: 0.723
- Micro F1: 0.743
- Weighted F1: 0.740
- Macro Precision: 0.740
- Micro Precision: 0.743
- Weighted Precision: 0.742
- Macro Recall: 0.712
- Micro Recall: 0.743
- Weighted Recall: 0.743
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/sasha/autotrain-BERTBase-TweetEval-1281249000
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("sasha/autotrain-BERTBase-TweetEval-1281249000", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("sasha/autotrain-BERTBase-TweetEval-1281249000", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
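# outputs.logits holds raw class scores; softmax + argmax gives the predicted
# class id, and the model config maps ids back to label names.
probs = outputs.logits.softmax(dim=-1)
print(model.config.id2label[int(probs.argmax())])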
``` |
Declan/NPR_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- sasha/autotrain-data-RobertaBaseTweetEval
co2_eq_emissions:
emissions: 28.053963781460215
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1281048989
- CO2 Emissions (in grams): 28.0540
## Validation Metrics
- Loss: 0.587
- Accuracy: 0.751
- Macro F1: 0.719
- Micro F1: 0.751
- Weighted F1: 0.746
- Macro Precision: 0.761
- Micro Precision: 0.751
- Weighted Precision: 0.753
- Macro Recall: 0.699
- Micro Recall: 0.751
- Weighted Recall: 0.751
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/sasha/autotrain-RobertaBaseTweetEval-1281048989
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("sasha/autotrain-RobertaBaseTweetEval-1281048989", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("sasha/autotrain-RobertaBaseTweetEval-1281048989", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
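# outputs.logits holds raw class scores; argmax picks the predicted class id,
# which model.config.id2label maps back to a label name.
pred_id = int(outputs.logits.argmax(dim=-1))
print(model.config.id2label[pred_id])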
``` |
DeepBasak/Slack | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | # RNA_Project
# Final Project - Connectionist Predictive Models
### Student - Caio Emanoel Serpa Lopes
### Tutor - Vitor Casadei
---
|**Project Type**|**Selected Model**|**Language**|
|--|--|--|
|Image Classification|MobileNetV2|TensorFlow|
[Click here to run the model in your browser (Roboflow)](https://classify.roboflow.com/?model=classifier_animals&version=2&api_key=IDPIYW7fvVaFbVq3eTlB)
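For context, a minimal TensorFlow sketch of the kind of MobileNetV2 transfer-learning setup this project describes (the dataset pipelines, input size, and class count are assumptions; the checkpoint path mirrors the training log below):

```python
import tensorflow as tf

# MobileNetV2 pretrained on ImageNet, used as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # assumed 3 animal classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Checkpoint callback matching the "saving model to training_1/cp.ckpt" lines in the log.
ckpt = tf.keras.callbacks.ModelCheckpoint(
    "training_1/cp.ckpt", save_weights_only=True, verbose=1)

# train_ds and val_ds are assumed tf.data.Dataset pipelines of (image, one-hot label) batches.
# model.fit(train_ds, validation_data=val_ds, epochs=1000, callbacks=[ckpt])
```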
# Performance
The trained model achieves a performance of **100%**.
## Training block output
<details>
<summary>Click to expand!</summary>
```
Epoch 1/1000
2/2 [==============================] - ETA: 0s - loss: 1.0496 - accuracy: 0.3750
Epoch 1: saving model to training_1/cp.ckpt
2/2 [==============================] - 9s 4s/step - loss: 1.0496 - accuracy: 0.3750 - val_loss: 0.8153 - val_accuracy: 0.4237
Epoch 2/1000
2/2 [==============================] - ETA: 0s - loss: 1.0002 - accuracy: 0.3281
Epoch 2: saving model to training_1/cp.ckpt
2/2 [==============================] - 4s 2s/step - loss: 1.0002 - accuracy: 0.3281 - val_loss: 0.7967 - val_accuracy: 0.4407
Epoch 3/1000
2/2 [==============================] - ETA: 0s - loss: 1.0473 - accuracy: 0.3594
Epoch 3: saving model to training_1/cp.ckpt
2/2 [==============================] - 3s 2s/step - loss: 1.0473 - accuracy: 0.3594 - val_loss: 0.7953 - val_accuracy: 0.4237
Epoch 4/1000
2/2 [==============================] - ETA: 0s - loss: 0.9252 - accuracy: 0.3250
Epoch 4: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.9252 - accuracy: 0.3250 - val_loss: 0.8039 - val_accuracy: 0.3729
Epoch 5/1000
2/2 [==============================] - ETA: 0s - loss: 0.9771 - accuracy: 0.3000
Epoch 5: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 781ms/step - loss: 0.9771 - accuracy: 0.3000 - val_loss: 0.8116 - val_accuracy: 0.3729
Epoch 6/1000
2/2 [==============================] - ETA: 0s - loss: 0.9402 - accuracy: 0.3125
Epoch 6: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 789ms/step - loss: 0.9402 - accuracy: 0.3125 - val_loss: 0.8183 - val_accuracy: 0.3898
Epoch 7/1000
2/2 [==============================] - ETA: 0s - loss: 0.8416 - accuracy: 0.4750
Epoch 7: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.8416 - accuracy: 0.4750 - val_loss: 0.8229 - val_accuracy: 0.3898
Epoch 8/1000
2/2 [==============================] - ETA: 0s - loss: 0.8543 - accuracy: 0.3516
Epoch 8: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 913ms/step - loss: 0.8543 - accuracy: 0.3516 - val_loss: 0.8213 - val_accuracy: 0.4068
Epoch 9/1000
2/2 [==============================] - ETA: 0s - loss: 0.7657 - accuracy: 0.4844
Epoch 9: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 908ms/step - loss: 0.7657 - accuracy: 0.4844 - val_loss: 0.8124 - val_accuracy: 0.4068
Epoch 10/1000
2/2 [==============================] - ETA: 0s - loss: 0.8208 - accuracy: 0.3125
Epoch 10: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.8208 - accuracy: 0.3125 - val_loss: 0.8035 - val_accuracy: 0.4237
Epoch 11/1000
2/2 [==============================] - ETA: 0s - loss: 0.8510 - accuracy: 0.3875
Epoch 11: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 789ms/step - loss: 0.8510 - accuracy: 0.3875 - val_loss: 0.7868 - val_accuracy: 0.4237
Epoch 12/1000
2/2 [==============================] - ETA: 0s - loss: 0.7841 - accuracy: 0.4609
Epoch 12: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 896ms/step - loss: 0.7841 - accuracy: 0.4609 - val_loss: 0.7674 - val_accuracy: 0.4407
Epoch 13/1000
2/2 [==============================] - ETA: 0s - loss: 0.7320 - accuracy: 0.5125
Epoch 13: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.7320 - accuracy: 0.5125 - val_loss: 0.7513 - val_accuracy: 0.4576
Epoch 14/1000
2/2 [==============================] - ETA: 0s - loss: 0.7788 - accuracy: 0.3828
Epoch 14: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 908ms/step - loss: 0.7788 - accuracy: 0.3828 - val_loss: 0.7345 - val_accuracy: 0.4915
Epoch 15/1000
2/2 [==============================] - ETA: 0s - loss: 0.8054 - accuracy: 0.3250
Epoch 15: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 803ms/step - loss: 0.8054 - accuracy: 0.3250 - val_loss: 0.7162 - val_accuracy: 0.4915
Epoch 16/1000
2/2 [==============================] - ETA: 0s - loss: 0.7073 - accuracy: 0.5125
Epoch 16: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.7073 - accuracy: 0.5125 - val_loss: 0.6949 - val_accuracy: 0.5085
Epoch 17/1000
2/2 [==============================] - ETA: 0s - loss: 0.7984 - accuracy: 0.4250
Epoch 17: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.7984 - accuracy: 0.4250 - val_loss: 0.6756 - val_accuracy: 0.5424
Epoch 18/1000
2/2 [==============================] - ETA: 0s - loss: 0.7332 - accuracy: 0.4750
Epoch 18: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 777ms/step - loss: 0.7332 - accuracy: 0.4750 - val_loss: 0.6573 - val_accuracy: 0.5763
Epoch 19/1000
2/2 [==============================] - ETA: 0s - loss: 0.6789 - accuracy: 0.5000
Epoch 19: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 928ms/step - loss: 0.6789 - accuracy: 0.5000 - val_loss: 0.6398 - val_accuracy: 0.5763
Epoch 20/1000
2/2 [==============================] - ETA: 0s - loss: 0.7541 - accuracy: 0.4844
Epoch 20: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.7541 - accuracy: 0.4844 - val_loss: 0.6241 - val_accuracy: 0.5763
Epoch 21/1000
2/2 [==============================] - ETA: 0s - loss: 0.7528 - accuracy: 0.4688
Epoch 21: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.7528 - accuracy: 0.4688 - val_loss: 0.6103 - val_accuracy: 0.5763
Epoch 22/1000
2/2 [==============================] - ETA: 0s - loss: 0.6765 - accuracy: 0.5000
Epoch 22: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.6765 - accuracy: 0.5000 - val_loss: 0.5980 - val_accuracy: 0.5932
Epoch 23/1000
2/2 [==============================] - ETA: 0s - loss: 0.6817 - accuracy: 0.5625
Epoch 23: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.6817 - accuracy: 0.5625 - val_loss: 0.5890 - val_accuracy: 0.6102
Epoch 24/1000
2/2 [==============================] - ETA: 0s - loss: 0.7056 - accuracy: 0.4125
Epoch 24: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 785ms/step - loss: 0.7056 - accuracy: 0.4125 - val_loss: 0.5802 - val_accuracy: 0.6102
Epoch 25/1000
2/2 [==============================] - ETA: 0s - loss: 0.7238 - accuracy: 0.4453
Epoch 25: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.7238 - accuracy: 0.4453 - val_loss: 0.5716 - val_accuracy: 0.6102
Epoch 26/1000
2/2 [==============================] - ETA: 0s - loss: 0.6118 - accuracy: 0.4875
Epoch 26: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.6118 - accuracy: 0.4875 - val_loss: 0.5640 - val_accuracy: 0.6102
Epoch 27/1000
2/2 [==============================] - ETA: 0s - loss: 0.6136 - accuracy: 0.5250
Epoch 27: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.6136 - accuracy: 0.5250 - val_loss: 0.5557 - val_accuracy: 0.6102
Epoch 28/1000
2/2 [==============================] - ETA: 0s - loss: 0.6424 - accuracy: 0.5156
Epoch 28: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 925ms/step - loss: 0.6424 - accuracy: 0.5156 - val_loss: 0.5483 - val_accuracy: 0.6271
Epoch 29/1000
2/2 [==============================] - ETA: 0s - loss: 0.6367 - accuracy: 0.5703
Epoch 29: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 925ms/step - loss: 0.6367 - accuracy: 0.5703 - val_loss: 0.5409 - val_accuracy: 0.6102
Epoch 30/1000
2/2 [==============================] - ETA: 0s - loss: 0.5621 - accuracy: 0.6375
Epoch 30: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5621 - accuracy: 0.6375 - val_loss: 0.5350 - val_accuracy: 0.6102
Epoch 31/1000
2/2 [==============================] - ETA: 0s - loss: 0.5903 - accuracy: 0.6625
Epoch 31: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 773ms/step - loss: 0.5903 - accuracy: 0.6625 - val_loss: 0.5297 - val_accuracy: 0.6102
Epoch 32/1000
2/2 [==============================] - ETA: 0s - loss: 0.5768 - accuracy: 0.5938
Epoch 32: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.5768 - accuracy: 0.5938 - val_loss: 0.5246 - val_accuracy: 0.5932
Epoch 33/1000
2/2 [==============================] - ETA: 0s - loss: 0.5517 - accuracy: 0.6625
Epoch 33: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 771ms/step - loss: 0.5517 - accuracy: 0.6625 - val_loss: 0.5197 - val_accuracy: 0.6102
Epoch 34/1000
2/2 [==============================] - ETA: 0s - loss: 0.5987 - accuracy: 0.5625
Epoch 34: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5987 - accuracy: 0.5625 - val_loss: 0.5156 - val_accuracy: 0.6271
Epoch 35/1000
2/2 [==============================] - ETA: 0s - loss: 0.5768 - accuracy: 0.5859
Epoch 35: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 866ms/step - loss: 0.5768 - accuracy: 0.5859 - val_loss: 0.5116 - val_accuracy: 0.6271
Epoch 36/1000
2/2 [==============================] - ETA: 0s - loss: 0.5395 - accuracy: 0.7000
Epoch 36: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5395 - accuracy: 0.7000 - val_loss: 0.5072 - val_accuracy: 0.6271
Epoch 37/1000
2/2 [==============================] - ETA: 0s - loss: 0.5549 - accuracy: 0.5625
Epoch 37: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5549 - accuracy: 0.5625 - val_loss: 0.5027 - val_accuracy: 0.6271
Epoch 38/1000
2/2 [==============================] - ETA: 0s - loss: 0.5485 - accuracy: 0.5750
Epoch 38: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 783ms/step - loss: 0.5485 - accuracy: 0.5750 - val_loss: 0.4985 - val_accuracy: 0.6271
Epoch 39/1000
2/2 [==============================] - ETA: 0s - loss: 0.5600 - accuracy: 0.5875
Epoch 39: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5600 - accuracy: 0.5875 - val_loss: 0.4944 - val_accuracy: 0.6441
Epoch 40/1000
2/2 [==============================] - ETA: 0s - loss: 0.5797 - accuracy: 0.6250
Epoch 40: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 766ms/step - loss: 0.5797 - accuracy: 0.6250 - val_loss: 0.4913 - val_accuracy: 0.6441
Epoch 41/1000
2/2 [==============================] - ETA: 0s - loss: 0.5891 - accuracy: 0.6125
Epoch 41: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 850ms/step - loss: 0.5891 - accuracy: 0.6125 - val_loss: 0.4880 - val_accuracy: 0.6610
Epoch 42/1000
2/2 [==============================] - ETA: 0s - loss: 0.5301 - accuracy: 0.6375
Epoch 42: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 810ms/step - loss: 0.5301 - accuracy: 0.6375 - val_loss: 0.4847 - val_accuracy: 0.6610
Epoch 43/1000
2/2 [==============================] - ETA: 0s - loss: 0.5775 - accuracy: 0.6328
Epoch 43: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 942ms/step - loss: 0.5775 - accuracy: 0.6328 - val_loss: 0.4796 - val_accuracy: 0.6610
Epoch 44/1000
2/2 [==============================] - ETA: 0s - loss: 0.4997 - accuracy: 0.6641
Epoch 44: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.4997 - accuracy: 0.6641 - val_loss: 0.4753 - val_accuracy: 0.6610
Epoch 45/1000
2/2 [==============================] - ETA: 0s - loss: 0.5236 - accuracy: 0.7109
Epoch 45: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.5236 - accuracy: 0.7109 - val_loss: 0.4713 - val_accuracy: 0.6780
Epoch 46/1000
2/2 [==============================] - ETA: 0s - loss: 0.5150 - accuracy: 0.6641
Epoch 46: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.5150 - accuracy: 0.6641 - val_loss: 0.4674 - val_accuracy: 0.6780
Epoch 47/1000
2/2 [==============================] - ETA: 0s - loss: 0.5213 - accuracy: 0.6625
Epoch 47: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5213 - accuracy: 0.6625 - val_loss: 0.4637 - val_accuracy: 0.6780
Epoch 48/1000
2/2 [==============================] - ETA: 0s - loss: 0.5835 - accuracy: 0.6016
Epoch 48: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 913ms/step - loss: 0.5835 - accuracy: 0.6016 - val_loss: 0.4594 - val_accuracy: 0.6780
Epoch 49/1000
2/2 [==============================] - ETA: 0s - loss: 0.5356 - accuracy: 0.6641
Epoch 49: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.5356 - accuracy: 0.6641 - val_loss: 0.4551 - val_accuracy: 0.6780
Epoch 50/1000
2/2 [==============================] - ETA: 0s - loss: 0.5144 - accuracy: 0.6797
Epoch 50: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.5144 - accuracy: 0.6797 - val_loss: 0.4520 - val_accuracy: 0.6949
Epoch 51/1000
2/2 [==============================] - ETA: 0s - loss: 0.5832 - accuracy: 0.6875
Epoch 51: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5832 - accuracy: 0.6875 - val_loss: 0.4498 - val_accuracy: 0.6949
Epoch 52/1000
2/2 [==============================] - ETA: 0s - loss: 0.5395 - accuracy: 0.6500
Epoch 52: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 795ms/step - loss: 0.5395 - accuracy: 0.6500 - val_loss: 0.4471 - val_accuracy: 0.6949
Epoch 53/1000
2/2 [==============================] - ETA: 0s - loss: 0.4901 - accuracy: 0.7188
Epoch 53: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 995ms/step - loss: 0.4901 - accuracy: 0.7188 - val_loss: 0.4434 - val_accuracy: 0.6949
Epoch 54/1000
2/2 [==============================] - ETA: 0s - loss: 0.4348 - accuracy: 0.7250
Epoch 54: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 796ms/step - loss: 0.4348 - accuracy: 0.7250 - val_loss: 0.4400 - val_accuracy: 0.6949
Epoch 55/1000
2/2 [==============================] - ETA: 0s - loss: 0.5062 - accuracy: 0.6641
Epoch 55: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.5062 - accuracy: 0.6641 - val_loss: 0.4370 - val_accuracy: 0.7119
Epoch 56/1000
2/2 [==============================] - ETA: 0s - loss: 0.5069 - accuracy: 0.5875
Epoch 56: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5069 - accuracy: 0.5875 - val_loss: 0.4306 - val_accuracy: 0.7119
Epoch 57/1000
2/2 [==============================] - ETA: 0s - loss: 0.4512 - accuracy: 0.7125
Epoch 57: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4512 - accuracy: 0.7125 - val_loss: 0.4254 - val_accuracy: 0.7119
Epoch 58/1000
2/2 [==============================] - ETA: 0s - loss: 0.5265 - accuracy: 0.6625
Epoch 58: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5265 - accuracy: 0.6625 - val_loss: 0.4208 - val_accuracy: 0.7119
Epoch 59/1000
2/2 [==============================] - ETA: 0s - loss: 0.4557 - accuracy: 0.7375
Epoch 59: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 792ms/step - loss: 0.4557 - accuracy: 0.7375 - val_loss: 0.4171 - val_accuracy: 0.7119
Epoch 60/1000
2/2 [==============================] - ETA: 0s - loss: 0.5258 - accuracy: 0.6125
Epoch 60: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 793ms/step - loss: 0.5258 - accuracy: 0.6125 - val_loss: 0.4139 - val_accuracy: 0.7119
Epoch 61/1000
2/2 [==============================] - ETA: 0s - loss: 0.4988 - accuracy: 0.6641
Epoch 61: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.4988 - accuracy: 0.6641 - val_loss: 0.4117 - val_accuracy: 0.7119
Epoch 62/1000
2/2 [==============================] - ETA: 0s - loss: 0.5074 - accuracy: 0.6625
Epoch 62: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.5074 - accuracy: 0.6625 - val_loss: 0.4109 - val_accuracy: 0.7119
Epoch 63/1000
2/2 [==============================] - ETA: 0s - loss: 0.5155 - accuracy: 0.6797
Epoch 63: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.5155 - accuracy: 0.6797 - val_loss: 0.4105 - val_accuracy: 0.7119
Epoch 64/1000
2/2 [==============================] - ETA: 0s - loss: 0.4738 - accuracy: 0.7031
Epoch 64: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.4738 - accuracy: 0.7031 - val_loss: 0.4101 - val_accuracy: 0.7119
Epoch 65/1000
2/2 [==============================] - ETA: 0s - loss: 0.4526 - accuracy: 0.7266
Epoch 65: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.4526 - accuracy: 0.7266 - val_loss: 0.4099 - val_accuracy: 0.7288
Epoch 66/1000
2/2 [==============================] - ETA: 0s - loss: 0.4432 - accuracy: 0.6875
Epoch 66: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 917ms/step - loss: 0.4432 - accuracy: 0.6875 - val_loss: 0.4096 - val_accuracy: 0.7288
Epoch 67/1000
2/2 [==============================] - ETA: 0s - loss: 0.4556 - accuracy: 0.7031
Epoch 67: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 891ms/step - loss: 0.4556 - accuracy: 0.7031 - val_loss: 0.4089 - val_accuracy: 0.7288
Epoch 68/1000
2/2 [==============================] - ETA: 0s - loss: 0.4906 - accuracy: 0.7000
Epoch 68: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4906 - accuracy: 0.7000 - val_loss: 0.4077 - val_accuracy: 0.7288
Epoch 69/1000
2/2 [==============================] - ETA: 0s - loss: 0.4392 - accuracy: 0.6953
Epoch 69: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 933ms/step - loss: 0.4392 - accuracy: 0.6953 - val_loss: 0.4067 - val_accuracy: 0.7288
Epoch 70/1000
2/2 [==============================] - ETA: 0s - loss: 0.4505 - accuracy: 0.7188
Epoch 70: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 911ms/step - loss: 0.4505 - accuracy: 0.7188 - val_loss: 0.4056 - val_accuracy: 0.7288
Epoch 71/1000
2/2 [==============================] - ETA: 0s - loss: 0.4227 - accuracy: 0.8250
Epoch 71: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4227 - accuracy: 0.8250 - val_loss: 0.4038 - val_accuracy: 0.7288
Epoch 72/1000
2/2 [==============================] - ETA: 0s - loss: 0.4216 - accuracy: 0.7188
Epoch 72: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 942ms/step - loss: 0.4216 - accuracy: 0.7188 - val_loss: 0.4028 - val_accuracy: 0.7288
Epoch 73/1000
2/2 [==============================] - ETA: 0s - loss: 0.4563 - accuracy: 0.7031
Epoch 73: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.4563 - accuracy: 0.7031 - val_loss: 0.4029 - val_accuracy: 0.7288
Epoch 74/1000
2/2 [==============================] - ETA: 0s - loss: 0.4717 - accuracy: 0.6719
Epoch 74: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.4717 - accuracy: 0.6719 - val_loss: 0.4026 - val_accuracy: 0.7288
Epoch 75/1000
2/2 [==============================] - ETA: 0s - loss: 0.3515 - accuracy: 0.8250
Epoch 75: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3515 - accuracy: 0.8250 - val_loss: 0.4009 - val_accuracy: 0.7119
Epoch 76/1000
2/2 [==============================] - ETA: 0s - loss: 0.4396 - accuracy: 0.7125
Epoch 76: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 795ms/step - loss: 0.4396 - accuracy: 0.7125 - val_loss: 0.4004 - val_accuracy: 0.7288
Epoch 77/1000
2/2 [==============================] - ETA: 0s - loss: 0.4737 - accuracy: 0.6250
Epoch 77: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.4737 - accuracy: 0.6250 - val_loss: 0.4002 - val_accuracy: 0.7458
Epoch 78/1000
2/2 [==============================] - ETA: 0s - loss: 0.3818 - accuracy: 0.8125
Epoch 78: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3818 - accuracy: 0.8125 - val_loss: 0.3997 - val_accuracy: 0.7458
Epoch 79/1000
2/2 [==============================] - ETA: 0s - loss: 0.3942 - accuracy: 0.7812
Epoch 79: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3942 - accuracy: 0.7812 - val_loss: 0.3999 - val_accuracy: 0.7458
Epoch 80/1000
2/2 [==============================] - ETA: 0s - loss: 0.4376 - accuracy: 0.7625
Epoch 80: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4376 - accuracy: 0.7625 - val_loss: 0.3999 - val_accuracy: 0.7288
Epoch 81/1000
2/2 [==============================] - ETA: 0s - loss: 0.4146 - accuracy: 0.7875
Epoch 81: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4146 - accuracy: 0.7875 - val_loss: 0.3985 - val_accuracy: 0.7458
Epoch 82/1000
2/2 [==============================] - ETA: 0s - loss: 0.4513 - accuracy: 0.7109
Epoch 82: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 952ms/step - loss: 0.4513 - accuracy: 0.7109 - val_loss: 0.3975 - val_accuracy: 0.7458
Epoch 83/1000
2/2 [==============================] - ETA: 0s - loss: 0.4000 - accuracy: 0.7875
Epoch 83: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4000 - accuracy: 0.7875 - val_loss: 0.3966 - val_accuracy: 0.7458
Epoch 84/1000
2/2 [==============================] - ETA: 0s - loss: 0.3920 - accuracy: 0.7812
Epoch 84: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3920 - accuracy: 0.7812 - val_loss: 0.3957 - val_accuracy: 0.7458
Epoch 85/1000
2/2 [==============================] - ETA: 0s - loss: 0.4480 - accuracy: 0.6750
Epoch 85: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4480 - accuracy: 0.6750 - val_loss: 0.3950 - val_accuracy: 0.7458
Epoch 86/1000
2/2 [==============================] - ETA: 0s - loss: 0.4010 - accuracy: 0.7656
Epoch 86: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 881ms/step - loss: 0.4010 - accuracy: 0.7656 - val_loss: 0.3956 - val_accuracy: 0.7288
Epoch 87/1000
2/2 [==============================] - ETA: 0s - loss: 0.4635 - accuracy: 0.7125
Epoch 87: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4635 - accuracy: 0.7125 - val_loss: 0.3978 - val_accuracy: 0.7288
Epoch 88/1000
2/2 [==============================] - ETA: 0s - loss: 0.4501 - accuracy: 0.7188
Epoch 88: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 915ms/step - loss: 0.4501 - accuracy: 0.7188 - val_loss: 0.4002 - val_accuracy: 0.7627
Epoch 89/1000
2/2 [==============================] - ETA: 0s - loss: 0.3909 - accuracy: 0.7875
Epoch 89: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3909 - accuracy: 0.7875 - val_loss: 0.4037 - val_accuracy: 0.7627
Epoch 90/1000
2/2 [==============================] - ETA: 0s - loss: 0.3992 - accuracy: 0.7250
Epoch 90: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3992 - accuracy: 0.7250 - val_loss: 0.4045 - val_accuracy: 0.7627
Epoch 91/1000
2/2 [==============================] - ETA: 0s - loss: 0.4022 - accuracy: 0.8203
Epoch 91: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.4022 - accuracy: 0.8203 - val_loss: 0.4050 - val_accuracy: 0.7458
Epoch 92/1000
2/2 [==============================] - ETA: 0s - loss: 0.4112 - accuracy: 0.7031
Epoch 92: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 972ms/step - loss: 0.4112 - accuracy: 0.7031 - val_loss: 0.4050 - val_accuracy: 0.7458
Epoch 93/1000
2/2 [==============================] - ETA: 0s - loss: 0.3795 - accuracy: 0.7500
Epoch 93: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3795 - accuracy: 0.7500 - val_loss: 0.4046 - val_accuracy: 0.7458
Epoch 94/1000
2/2 [==============================] - ETA: 0s - loss: 0.4178 - accuracy: 0.7250
Epoch 94: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 786ms/step - loss: 0.4178 - accuracy: 0.7250 - val_loss: 0.4047 - val_accuracy: 0.7458
Epoch 95/1000
2/2 [==============================] - ETA: 0s - loss: 0.3446 - accuracy: 0.8281
Epoch 95: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3446 - accuracy: 0.8281 - val_loss: 0.4047 - val_accuracy: 0.7458
Epoch 96/1000
2/2 [==============================] - ETA: 0s - loss: 0.4607 - accuracy: 0.7250
Epoch 96: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4607 - accuracy: 0.7250 - val_loss: 0.4035 - val_accuracy: 0.7458
Epoch 97/1000
2/2 [==============================] - ETA: 0s - loss: 0.3616 - accuracy: 0.7875
Epoch 97: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 809ms/step - loss: 0.3616 - accuracy: 0.7875 - val_loss: 0.4021 - val_accuracy: 0.7458
Epoch 98/1000
2/2 [==============================] - ETA: 0s - loss: 0.3380 - accuracy: 0.7375
Epoch 98: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 795ms/step - loss: 0.3380 - accuracy: 0.7375 - val_loss: 0.4014 - val_accuracy: 0.7458
Epoch 99/1000
2/2 [==============================] - ETA: 0s - loss: 0.3621 - accuracy: 0.8047
Epoch 99: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 925ms/step - loss: 0.3621 - accuracy: 0.8047 - val_loss: 0.3993 - val_accuracy: 0.7288
Epoch 100/1000
2/2 [==============================] - ETA: 0s - loss: 0.3969 - accuracy: 0.7578
Epoch 100: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 922ms/step - loss: 0.3969 - accuracy: 0.7578 - val_loss: 0.3952 - val_accuracy: 0.7288
Epoch 101/1000
2/2 [==============================] - ETA: 0s - loss: 0.3638 - accuracy: 0.7500
Epoch 101: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 807ms/step - loss: 0.3638 - accuracy: 0.7500 - val_loss: 0.3910 - val_accuracy: 0.7288
Epoch 102/1000
2/2 [==============================] - ETA: 0s - loss: 0.3590 - accuracy: 0.7891
Epoch 102: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 912ms/step - loss: 0.3590 - accuracy: 0.7891 - val_loss: 0.3877 - val_accuracy: 0.7288
Epoch 103/1000
2/2 [==============================] - ETA: 0s - loss: 0.3947 - accuracy: 0.7656
Epoch 103: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 959ms/step - loss: 0.3947 - accuracy: 0.7656 - val_loss: 0.3841 - val_accuracy: 0.7288
Epoch 104/1000
2/2 [==============================] - ETA: 0s - loss: 0.4289 - accuracy: 0.7250
Epoch 104: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 805ms/step - loss: 0.4289 - accuracy: 0.7250 - val_loss: 0.3815 - val_accuracy: 0.7288
Epoch 105/1000
2/2 [==============================] - ETA: 0s - loss: 0.3684 - accuracy: 0.8359
Epoch 105: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3684 - accuracy: 0.8359 - val_loss: 0.3784 - val_accuracy: 0.7288
Epoch 106/1000
2/2 [==============================] - ETA: 0s - loss: 0.3745 - accuracy: 0.8000
Epoch 106: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 866ms/step - loss: 0.3745 - accuracy: 0.8000 - val_loss: 0.3758 - val_accuracy: 0.7288
Epoch 107/1000
2/2 [==============================] - ETA: 0s - loss: 0.3485 - accuracy: 0.8125
Epoch 107: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 917ms/step - loss: 0.3485 - accuracy: 0.8125 - val_loss: 0.3743 - val_accuracy: 0.7458
Epoch 108/1000
2/2 [==============================] - ETA: 0s - loss: 0.3889 - accuracy: 0.8000
Epoch 108: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 997ms/step - loss: 0.3889 - accuracy: 0.8000 - val_loss: 0.3726 - val_accuracy: 0.7458
Epoch 109/1000
2/2 [==============================] - ETA: 0s - loss: 0.3484 - accuracy: 0.8672
Epoch 109: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 937ms/step - loss: 0.3484 - accuracy: 0.8672 - val_loss: 0.3712 - val_accuracy: 0.7458
Epoch 110/1000
2/2 [==============================] - ETA: 0s - loss: 0.3734 - accuracy: 0.8047
Epoch 110: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3734 - accuracy: 0.8047 - val_loss: 0.3696 - val_accuracy: 0.7458
Epoch 111/1000
2/2 [==============================] - ETA: 0s - loss: 0.4089 - accuracy: 0.7875
Epoch 111: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 789ms/step - loss: 0.4089 - accuracy: 0.7875 - val_loss: 0.3676 - val_accuracy: 0.7458
Epoch 112/1000
2/2 [==============================] - ETA: 0s - loss: 0.3788 - accuracy: 0.7750
Epoch 112: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 783ms/step - loss: 0.3788 - accuracy: 0.7750 - val_loss: 0.3646 - val_accuracy: 0.7288
Epoch 113/1000
2/2 [==============================] - ETA: 0s - loss: 0.3728 - accuracy: 0.7812
Epoch 113: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3728 - accuracy: 0.7812 - val_loss: 0.3621 - val_accuracy: 0.7288
Epoch 114/1000
2/2 [==============================] - ETA: 0s - loss: 0.3751 - accuracy: 0.8000
Epoch 114: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3751 - accuracy: 0.8000 - val_loss: 0.3599 - val_accuracy: 0.7288
Epoch 115/1000
2/2 [==============================] - ETA: 0s - loss: 0.3739 - accuracy: 0.7734
Epoch 115: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 946ms/step - loss: 0.3739 - accuracy: 0.7734 - val_loss: 0.3578 - val_accuracy: 0.7288
Epoch 116/1000
2/2 [==============================] - ETA: 0s - loss: 0.3883 - accuracy: 0.8000
Epoch 116: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3883 - accuracy: 0.8000 - val_loss: 0.3563 - val_accuracy: 0.7288
Epoch 117/1000
2/2 [==============================] - ETA: 0s - loss: 0.3443 - accuracy: 0.8203
Epoch 117: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3443 - accuracy: 0.8203 - val_loss: 0.3552 - val_accuracy: 0.7458
Epoch 118/1000
2/2 [==============================] - ETA: 0s - loss: 0.3449 - accuracy: 0.8375
Epoch 118: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3449 - accuracy: 0.8375 - val_loss: 0.3555 - val_accuracy: 0.7458
Epoch 119/1000
2/2 [==============================] - ETA: 0s - loss: 0.3562 - accuracy: 0.8000
Epoch 119: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3562 - accuracy: 0.8000 - val_loss: 0.3556 - val_accuracy: 0.7458
Epoch 120/1000
2/2 [==============================] - ETA: 0s - loss: 0.2561 - accuracy: 0.8828
Epoch 120: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 914ms/step - loss: 0.2561 - accuracy: 0.8828 - val_loss: 0.3562 - val_accuracy: 0.7458
Epoch 121/1000
2/2 [==============================] - ETA: 0s - loss: 0.3495 - accuracy: 0.8125
Epoch 121: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 916ms/step - loss: 0.3495 - accuracy: 0.8125 - val_loss: 0.3566 - val_accuracy: 0.7627
Epoch 122/1000
2/2 [==============================] - ETA: 0s - loss: 0.3165 - accuracy: 0.8672
Epoch 122: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3165 - accuracy: 0.8672 - val_loss: 0.3566 - val_accuracy: 0.7627
Epoch 123/1000
2/2 [==============================] - ETA: 0s - loss: 0.3741 - accuracy: 0.7734
Epoch 123: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3741 - accuracy: 0.7734 - val_loss: 0.3571 - val_accuracy: 0.7627
Epoch 124/1000
2/2 [==============================] - ETA: 0s - loss: 0.3923 - accuracy: 0.7500
Epoch 124: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 955ms/step - loss: 0.3923 - accuracy: 0.7500 - val_loss: 0.3574 - val_accuracy: 0.7627
Epoch 125/1000
2/2 [==============================] - ETA: 0s - loss: 0.3380 - accuracy: 0.7812
Epoch 125: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 912ms/step - loss: 0.3380 - accuracy: 0.7812 - val_loss: 0.3575 - val_accuracy: 0.7627
Epoch 126/1000
2/2 [==============================] - ETA: 0s - loss: 0.3617 - accuracy: 0.7875
Epoch 126: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3617 - accuracy: 0.7875 - val_loss: 0.3581 - val_accuracy: 0.7627
Epoch 127/1000
2/2 [==============================] - ETA: 0s - loss: 0.4007 - accuracy: 0.7000
Epoch 127: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4007 - accuracy: 0.7000 - val_loss: 0.3577 - val_accuracy: 0.7627
Epoch 128/1000
2/2 [==============================] - ETA: 0s - loss: 0.3632 - accuracy: 0.8000
Epoch 128: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3632 - accuracy: 0.8000 - val_loss: 0.3570 - val_accuracy: 0.7627
Epoch 129/1000
2/2 [==============================] - ETA: 0s - loss: 0.3418 - accuracy: 0.8359
Epoch 129: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3418 - accuracy: 0.8359 - val_loss: 0.3558 - val_accuracy: 0.7627
Epoch 130/1000
2/2 [==============================] - ETA: 0s - loss: 0.3338 - accuracy: 0.8250
Epoch 130: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 815ms/step - loss: 0.3338 - accuracy: 0.8250 - val_loss: 0.3545 - val_accuracy: 0.7627
Epoch 131/1000
2/2 [==============================] - ETA: 0s - loss: 0.3705 - accuracy: 0.7750
Epoch 131: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3705 - accuracy: 0.7750 - val_loss: 0.3534 - val_accuracy: 0.7627
Epoch 132/1000
2/2 [==============================] - ETA: 0s - loss: 0.2992 - accuracy: 0.8625
Epoch 132: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2992 - accuracy: 0.8625 - val_loss: 0.3531 - val_accuracy: 0.7627
Epoch 133/1000
2/2 [==============================] - ETA: 0s - loss: 0.3112 - accuracy: 0.8438
Epoch 133: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 940ms/step - loss: 0.3112 - accuracy: 0.8438 - val_loss: 0.3533 - val_accuracy: 0.7627
Epoch 134/1000
2/2 [==============================] - ETA: 0s - loss: 0.3687 - accuracy: 0.8203
Epoch 134: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 926ms/step - loss: 0.3687 - accuracy: 0.8203 - val_loss: 0.3521 - val_accuracy: 0.7627
Epoch 135/1000
2/2 [==============================] - ETA: 0s - loss: 0.4165 - accuracy: 0.7250
Epoch 135: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.4165 - accuracy: 0.7250 - val_loss: 0.3497 - val_accuracy: 0.7627
Epoch 136/1000
2/2 [==============================] - ETA: 0s - loss: 0.2755 - accuracy: 0.8750
Epoch 136: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 801ms/step - loss: 0.2755 - accuracy: 0.8750 - val_loss: 0.3483 - val_accuracy: 0.7627
Epoch 137/1000
2/2 [==============================] - ETA: 0s - loss: 0.3457 - accuracy: 0.8000
Epoch 137: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 783ms/step - loss: 0.3457 - accuracy: 0.8000 - val_loss: 0.3478 - val_accuracy: 0.7627
Epoch 138/1000
2/2 [==============================] - ETA: 0s - loss: 0.3676 - accuracy: 0.7812
Epoch 138: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3676 - accuracy: 0.7812 - val_loss: 0.3470 - val_accuracy: 0.7627
Epoch 139/1000
2/2 [==============================] - ETA: 0s - loss: 0.3189 - accuracy: 0.7875
Epoch 139: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 781ms/step - loss: 0.3189 - accuracy: 0.7875 - val_loss: 0.3467 - val_accuracy: 0.7627
Epoch 140/1000
2/2 [==============================] - ETA: 0s - loss: 0.3633 - accuracy: 0.7875
Epoch 140: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3633 - accuracy: 0.7875 - val_loss: 0.3483 - val_accuracy: 0.7627
Epoch 141/1000
2/2 [==============================] - ETA: 0s - loss: 0.3355 - accuracy: 0.7875
Epoch 141: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 852ms/step - loss: 0.3355 - accuracy: 0.7875 - val_loss: 0.3495 - val_accuracy: 0.7627
Epoch 142/1000
2/2 [==============================] - ETA: 0s - loss: 0.3416 - accuracy: 0.8250
Epoch 142: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 796ms/step - loss: 0.3416 - accuracy: 0.8250 - val_loss: 0.3497 - val_accuracy: 0.7627
Epoch 143/1000
2/2 [==============================] - ETA: 0s - loss: 0.3214 - accuracy: 0.8438
Epoch 143: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3214 - accuracy: 0.8438 - val_loss: 0.3494 - val_accuracy: 0.7627
Epoch 144/1000
2/2 [==============================] - ETA: 0s - loss: 0.3541 - accuracy: 0.7875
Epoch 144: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3541 - accuracy: 0.7875 - val_loss: 0.3490 - val_accuracy: 0.7627
Epoch 145/1000
2/2 [==============================] - ETA: 0s - loss: 0.3347 - accuracy: 0.8500
Epoch 145: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 806ms/step - loss: 0.3347 - accuracy: 0.8500 - val_loss: 0.3488 - val_accuracy: 0.7627
Epoch 146/1000
2/2 [==============================] - ETA: 0s - loss: 0.3238 - accuracy: 0.8594
Epoch 146: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 969ms/step - loss: 0.3238 - accuracy: 0.8594 - val_loss: 0.3493 - val_accuracy: 0.7627
Epoch 147/1000
2/2 [==============================] - ETA: 0s - loss: 0.3252 - accuracy: 0.8250
Epoch 147: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 799ms/step - loss: 0.3252 - accuracy: 0.8250 - val_loss: 0.3499 - val_accuracy: 0.7627
Epoch 148/1000
2/2 [==============================] - ETA: 0s - loss: 0.3136 - accuracy: 0.8250
Epoch 148: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 766ms/step - loss: 0.3136 - accuracy: 0.8250 - val_loss: 0.3515 - val_accuracy: 0.7627
Epoch 149/1000
2/2 [==============================] - ETA: 0s - loss: 0.3215 - accuracy: 0.8250
Epoch 149: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3215 - accuracy: 0.8250 - val_loss: 0.3529 - val_accuracy: 0.7627
Epoch 150/1000
2/2 [==============================] - ETA: 0s - loss: 0.3838 - accuracy: 0.7625
Epoch 150: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3838 - accuracy: 0.7625 - val_loss: 0.3546 - val_accuracy: 0.7627
Epoch 151/1000
2/2 [==============================] - ETA: 0s - loss: 0.3322 - accuracy: 0.8125
Epoch 151: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 809ms/step - loss: 0.3322 - accuracy: 0.8125 - val_loss: 0.3537 - val_accuracy: 0.7627
Epoch 152/1000
2/2 [==============================] - ETA: 0s - loss: 0.3422 - accuracy: 0.8281
Epoch 152: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 913ms/step - loss: 0.3422 - accuracy: 0.8281 - val_loss: 0.3523 - val_accuracy: 0.7627
Epoch 153/1000
2/2 [==============================] - ETA: 0s - loss: 0.3141 - accuracy: 0.8500
Epoch 153: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 876ms/step - loss: 0.3141 - accuracy: 0.8500 - val_loss: 0.3495 - val_accuracy: 0.7627
Epoch 154/1000
2/2 [==============================] - ETA: 0s - loss: 0.3786 - accuracy: 0.7625
Epoch 154: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3786 - accuracy: 0.7625 - val_loss: 0.3458 - val_accuracy: 0.7627
Epoch 155/1000
2/2 [==============================] - ETA: 0s - loss: 0.3309 - accuracy: 0.8125
Epoch 155: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3309 - accuracy: 0.8125 - val_loss: 0.3425 - val_accuracy: 0.7627
Epoch 156/1000
2/2 [==============================] - ETA: 0s - loss: 0.3570 - accuracy: 0.7969
Epoch 156: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 928ms/step - loss: 0.3570 - accuracy: 0.7969 - val_loss: 0.3386 - val_accuracy: 0.7797
Epoch 157/1000
2/2 [==============================] - ETA: 0s - loss: 0.3137 - accuracy: 0.8250
Epoch 157: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 779ms/step - loss: 0.3137 - accuracy: 0.8250 - val_loss: 0.3349 - val_accuracy: 0.7797
Epoch 158/1000
2/2 [==============================] - ETA: 0s - loss: 0.3485 - accuracy: 0.8281
Epoch 158: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3485 - accuracy: 0.8281 - val_loss: 0.3321 - val_accuracy: 0.7797
Epoch 159/1000
2/2 [==============================] - ETA: 0s - loss: 0.3114 - accuracy: 0.8594
Epoch 159: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 997ms/step - loss: 0.3114 - accuracy: 0.8594 - val_loss: 0.3295 - val_accuracy: 0.7797
Epoch 160/1000
2/2 [==============================] - ETA: 0s - loss: 0.3695 - accuracy: 0.7750
Epoch 160: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3695 - accuracy: 0.7750 - val_loss: 0.3255 - val_accuracy: 0.7797
Epoch 161/1000
2/2 [==============================] - ETA: 0s - loss: 0.3590 - accuracy: 0.8125
Epoch 161: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 794ms/step - loss: 0.3590 - accuracy: 0.8125 - val_loss: 0.3215 - val_accuracy: 0.7797
Epoch 162/1000
2/2 [==============================] - ETA: 0s - loss: 0.3375 - accuracy: 0.8250
Epoch 162: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3375 - accuracy: 0.8250 - val_loss: 0.3184 - val_accuracy: 0.7797
Epoch 163/1000
2/2 [==============================] - ETA: 0s - loss: 0.2919 - accuracy: 0.8672
Epoch 163: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2919 - accuracy: 0.8672 - val_loss: 0.3172 - val_accuracy: 0.7797
Epoch 164/1000
2/2 [==============================] - ETA: 0s - loss: 0.2972 - accuracy: 0.8594
Epoch 164: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 937ms/step - loss: 0.2972 - accuracy: 0.8594 - val_loss: 0.3171 - val_accuracy: 0.7797
Epoch 165/1000
2/2 [==============================] - ETA: 0s - loss: 0.3267 - accuracy: 0.8359
Epoch 165: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3267 - accuracy: 0.8359 - val_loss: 0.3175 - val_accuracy: 0.7797
Epoch 166/1000
2/2 [==============================] - ETA: 0s - loss: 0.2999 - accuracy: 0.8438
Epoch 166: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2999 - accuracy: 0.8438 - val_loss: 0.3182 - val_accuracy: 0.7797
Epoch 167/1000
2/2 [==============================] - ETA: 0s - loss: 0.3014 - accuracy: 0.8750
Epoch 167: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 787ms/step - loss: 0.3014 - accuracy: 0.8750 - val_loss: 0.3198 - val_accuracy: 0.7797
Epoch 168/1000
2/2 [==============================] - ETA: 0s - loss: 0.2670 - accuracy: 0.8250
Epoch 168: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 810ms/step - loss: 0.2670 - accuracy: 0.8250 - val_loss: 0.3217 - val_accuracy: 0.7797
Epoch 169/1000
2/2 [==============================] - ETA: 0s - loss: 0.3162 - accuracy: 0.8750
Epoch 169: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 793ms/step - loss: 0.3162 - accuracy: 0.8750 - val_loss: 0.3219 - val_accuracy: 0.7797
Epoch 170/1000
2/2 [==============================] - ETA: 0s - loss: 0.3178 - accuracy: 0.8047
Epoch 170: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 943ms/step - loss: 0.3178 - accuracy: 0.8047 - val_loss: 0.3221 - val_accuracy: 0.7797
Epoch 171/1000
2/2 [==============================] - ETA: 0s - loss: 0.2931 - accuracy: 0.8672
Epoch 171: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 923ms/step - loss: 0.2931 - accuracy: 0.8672 - val_loss: 0.3225 - val_accuracy: 0.7797
Epoch 172/1000
2/2 [==============================] - ETA: 0s - loss: 0.3197 - accuracy: 0.8047
Epoch 172: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3197 - accuracy: 0.8047 - val_loss: 0.3238 - val_accuracy: 0.7797
Epoch 173/1000
2/2 [==============================] - ETA: 0s - loss: 0.2872 - accuracy: 0.8281
Epoch 173: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2872 - accuracy: 0.8281 - val_loss: 0.3255 - val_accuracy: 0.7797
Epoch 174/1000
2/2 [==============================] - ETA: 0s - loss: 0.3595 - accuracy: 0.7734
Epoch 174: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3595 - accuracy: 0.7734 - val_loss: 0.3273 - val_accuracy: 0.7797
Epoch 175/1000
2/2 [==============================] - ETA: 0s - loss: 0.3140 - accuracy: 0.8375
Epoch 175: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 811ms/step - loss: 0.3140 - accuracy: 0.8375 - val_loss: 0.3280 - val_accuracy: 0.7797
Epoch 176/1000
2/2 [==============================] - ETA: 0s - loss: 0.3210 - accuracy: 0.8125
Epoch 176: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3210 - accuracy: 0.8125 - val_loss: 0.3281 - val_accuracy: 0.7797
Epoch 177/1000
2/2 [==============================] - ETA: 0s - loss: 0.2593 - accuracy: 0.8125
Epoch 177: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2593 - accuracy: 0.8125 - val_loss: 0.3297 - val_accuracy: 0.7797
Epoch 178/1000
2/2 [==============================] - ETA: 0s - loss: 0.3493 - accuracy: 0.7891
Epoch 178: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3493 - accuracy: 0.7891 - val_loss: 0.3316 - val_accuracy: 0.7797
Epoch 179/1000
2/2 [==============================] - ETA: 0s - loss: 0.3391 - accuracy: 0.8375
Epoch 179: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3391 - accuracy: 0.8375 - val_loss: 0.3345 - val_accuracy: 0.7797
Epoch 180/1000
2/2 [==============================] - ETA: 0s - loss: 0.2908 - accuracy: 0.8438
Epoch 180: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2908 - accuracy: 0.8438 - val_loss: 0.3373 - val_accuracy: 0.7797
Epoch 181/1000
2/2 [==============================] - ETA: 0s - loss: 0.2884 - accuracy: 0.8438
Epoch 181: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 912ms/step - loss: 0.2884 - accuracy: 0.8438 - val_loss: 0.3386 - val_accuracy: 0.7797
Epoch 182/1000
2/2 [==============================] - ETA: 0s - loss: 0.2741 - accuracy: 0.8750
Epoch 182: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2741 - accuracy: 0.8750 - val_loss: 0.3397 - val_accuracy: 0.7966
Epoch 183/1000
2/2 [==============================] - ETA: 0s - loss: 0.3079 - accuracy: 0.8375
Epoch 183: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3079 - accuracy: 0.8375 - val_loss: 0.3402 - val_accuracy: 0.7966
Epoch 184/1000
2/2 [==============================] - ETA: 0s - loss: 0.2915 - accuracy: 0.8500
Epoch 184: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 821ms/step - loss: 0.2915 - accuracy: 0.8500 - val_loss: 0.3408 - val_accuracy: 0.8136
Epoch 185/1000
2/2 [==============================] - ETA: 0s - loss: 0.2488 - accuracy: 0.9062
Epoch 185: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2488 - accuracy: 0.9062 - val_loss: 0.3411 - val_accuracy: 0.8136
Epoch 186/1000
2/2 [==============================] - ETA: 0s - loss: 0.2850 - accuracy: 0.8281
Epoch 186: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2850 - accuracy: 0.8281 - val_loss: 0.3412 - val_accuracy: 0.8136
Epoch 187/1000
2/2 [==============================] - ETA: 0s - loss: 0.3010 - accuracy: 0.8375
Epoch 187: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 816ms/step - loss: 0.3010 - accuracy: 0.8375 - val_loss: 0.3412 - val_accuracy: 0.7966
Epoch 188/1000
2/2 [==============================] - ETA: 0s - loss: 0.2825 - accuracy: 0.8594
Epoch 188: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 979ms/step - loss: 0.2825 - accuracy: 0.8594 - val_loss: 0.3410 - val_accuracy: 0.7966
Epoch 189/1000
2/2 [==============================] - ETA: 0s - loss: 0.3138 - accuracy: 0.8125
Epoch 189: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 956ms/step - loss: 0.3138 - accuracy: 0.8125 - val_loss: 0.3392 - val_accuracy: 0.7966
Epoch 190/1000
2/2 [==============================] - ETA: 0s - loss: 0.3285 - accuracy: 0.8000
Epoch 190: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 793ms/step - loss: 0.3285 - accuracy: 0.8000 - val_loss: 0.3374 - val_accuracy: 0.8136
Epoch 191/1000
2/2 [==============================] - ETA: 0s - loss: 0.3562 - accuracy: 0.7375
Epoch 191: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 794ms/step - loss: 0.3562 - accuracy: 0.7375 - val_loss: 0.3362 - val_accuracy: 0.8305
Epoch 192/1000
2/2 [==============================] - ETA: 0s - loss: 0.2750 - accuracy: 0.8625
Epoch 192: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 805ms/step - loss: 0.2750 - accuracy: 0.8625 - val_loss: 0.3371 - val_accuracy: 0.8305
Epoch 193/1000
2/2 [==============================] - ETA: 0s - loss: 0.2853 - accuracy: 0.8750
Epoch 193: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 778ms/step - loss: 0.2853 - accuracy: 0.8750 - val_loss: 0.3378 - val_accuracy: 0.8305
Epoch 194/1000
2/2 [==============================] - ETA: 0s - loss: 0.2862 - accuracy: 0.8625
Epoch 194: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2862 - accuracy: 0.8625 - val_loss: 0.3387 - val_accuracy: 0.8136
Epoch 195/1000
2/2 [==============================] - ETA: 0s - loss: 0.3483 - accuracy: 0.7625
Epoch 195: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.3483 - accuracy: 0.7625 - val_loss: 0.3393 - val_accuracy: 0.8136
Epoch 196/1000
2/2 [==============================] - ETA: 0s - loss: 0.2863 - accuracy: 0.8594
Epoch 196: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2863 - accuracy: 0.8594 - val_loss: 0.3378 - val_accuracy: 0.8136
Epoch 197/1000
2/2 [==============================] - ETA: 0s - loss: 0.2744 - accuracy: 0.8500
Epoch 197: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 824ms/step - loss: 0.2744 - accuracy: 0.8500 - val_loss: 0.3355 - val_accuracy: 0.8136
Epoch 198/1000
2/2 [==============================] - ETA: 0s - loss: 0.2827 - accuracy: 0.8438
Epoch 198: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 952ms/step - loss: 0.2827 - accuracy: 0.8438 - val_loss: 0.3326 - val_accuracy: 0.8136
Epoch 199/1000
2/2 [==============================] - ETA: 0s - loss: 0.2542 - accuracy: 0.8875
Epoch 199: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 815ms/step - loss: 0.2542 - accuracy: 0.8875 - val_loss: 0.3295 - val_accuracy: 0.8136
Epoch 200/1000
2/2 [==============================] - ETA: 0s - loss: 0.2779 - accuracy: 0.8672
Epoch 200: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2779 - accuracy: 0.8672 - val_loss: 0.3259 - val_accuracy: 0.8305
Epoch 201/1000
2/2 [==============================] - ETA: 0s - loss: 0.3151 - accuracy: 0.8516
Epoch 201: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.3151 - accuracy: 0.8516 - val_loss: 0.3212 - val_accuracy: 0.8305
Epoch 202/1000
2/2 [==============================] - ETA: 0s - loss: 0.2635 - accuracy: 0.8438
Epoch 202: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2635 - accuracy: 0.8438 - val_loss: 0.3172 - val_accuracy: 0.8305
Epoch 203/1000
2/2 [==============================] - ETA: 0s - loss: 0.2691 - accuracy: 0.8906
Epoch 203: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2691 - accuracy: 0.8906 - val_loss: 0.3138 - val_accuracy: 0.8305
Epoch 204/1000
2/2 [==============================] - ETA: 0s - loss: 0.2818 - accuracy: 0.8500
Epoch 204: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2818 - accuracy: 0.8500 - val_loss: 0.3109 - val_accuracy: 0.8305
Epoch 205/1000
2/2 [==============================] - ETA: 0s - loss: 0.2874 - accuracy: 0.8125
Epoch 205: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2874 - accuracy: 0.8125 - val_loss: 0.3089 - val_accuracy: 0.8136
Epoch 206/1000
2/2 [==============================] - ETA: 0s - loss: 0.2961 - accuracy: 0.8500
Epoch 206: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 821ms/step - loss: 0.2961 - accuracy: 0.8500 - val_loss: 0.3080 - val_accuracy: 0.8136
Epoch 207/1000
2/2 [==============================] - ETA: 0s - loss: 0.2628 - accuracy: 0.8516
Epoch 207: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2628 - accuracy: 0.8516 - val_loss: 0.3077 - val_accuracy: 0.8136
Epoch 208/1000
2/2 [==============================] - ETA: 0s - loss: 0.2807 - accuracy: 0.8750
Epoch 208: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 792ms/step - loss: 0.2807 - accuracy: 0.8750 - val_loss: 0.3076 - val_accuracy: 0.8136
Epoch 209/1000
2/2 [==============================] - ETA: 0s - loss: 0.2190 - accuracy: 0.8828
Epoch 209: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 902ms/step - loss: 0.2190 - accuracy: 0.8828 - val_loss: 0.3073 - val_accuracy: 0.8136
Epoch 210/1000
2/2 [==============================] - ETA: 0s - loss: 0.2307 - accuracy: 0.8875
Epoch 210: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2307 - accuracy: 0.8875 - val_loss: 0.3073 - val_accuracy: 0.8136
Epoch 211/1000
2/2 [==============================] - ETA: 0s - loss: 0.2403 - accuracy: 0.8672
Epoch 211: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2403 - accuracy: 0.8672 - val_loss: 0.3079 - val_accuracy: 0.8136
Epoch 212/1000
2/2 [==============================] - ETA: 0s - loss: 0.2151 - accuracy: 0.9375
Epoch 212: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2151 - accuracy: 0.9375 - val_loss: 0.3075 - val_accuracy: 0.8136
Epoch 213/1000
2/2 [==============================] - ETA: 0s - loss: 0.2767 - accuracy: 0.8875
Epoch 213: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 795ms/step - loss: 0.2767 - accuracy: 0.8875 - val_loss: 0.3060 - val_accuracy: 0.8136
Epoch 214/1000
2/2 [==============================] - ETA: 0s - loss: 0.2731 - accuracy: 0.8672
Epoch 214: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2731 - accuracy: 0.8672 - val_loss: 0.3040 - val_accuracy: 0.8136
Epoch 215/1000
2/2 [==============================] - ETA: 0s - loss: 0.2449 - accuracy: 0.8828
Epoch 215: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2449 - accuracy: 0.8828 - val_loss: 0.3022 - val_accuracy: 0.8136
Epoch 216/1000
2/2 [==============================] - ETA: 0s - loss: 0.2654 - accuracy: 0.8203
Epoch 216: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2654 - accuracy: 0.8203 - val_loss: 0.2999 - val_accuracy: 0.8136
Epoch 217/1000
2/2 [==============================] - ETA: 0s - loss: 0.2781 - accuracy: 0.8672
Epoch 217: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2781 - accuracy: 0.8672 - val_loss: 0.2985 - val_accuracy: 0.8136
Epoch 218/1000
2/2 [==============================] - ETA: 0s - loss: 0.3467 - accuracy: 0.7875
Epoch 218: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 808ms/step - loss: 0.3467 - accuracy: 0.7875 - val_loss: 0.2967 - val_accuracy: 0.8136
Epoch 219/1000
2/2 [==============================] - ETA: 0s - loss: 0.2858 - accuracy: 0.8750
Epoch 219: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2858 - accuracy: 0.8750 - val_loss: 0.2970 - val_accuracy: 0.8136
Epoch 220/1000
2/2 [==============================] - ETA: 0s - loss: 0.2070 - accuracy: 0.9125
Epoch 220: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2070 - accuracy: 0.9125 - val_loss: 0.2983 - val_accuracy: 0.8136
Epoch 221/1000
2/2 [==============================] - ETA: 0s - loss: 0.2974 - accuracy: 0.8359
Epoch 221: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2974 - accuracy: 0.8359 - val_loss: 0.2998 - val_accuracy: 0.8136
Epoch 222/1000
2/2 [==============================] - ETA: 0s - loss: 0.2884 - accuracy: 0.8625
Epoch 222: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 806ms/step - loss: 0.2884 - accuracy: 0.8625 - val_loss: 0.3019 - val_accuracy: 0.8136
Epoch 223/1000
2/2 [==============================] - ETA: 0s - loss: 0.2783 - accuracy: 0.8438
Epoch 223: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2783 - accuracy: 0.8438 - val_loss: 0.3043 - val_accuracy: 0.8136
Epoch 224/1000
2/2 [==============================] - ETA: 0s - loss: 0.2062 - accuracy: 0.8875
Epoch 224: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2062 - accuracy: 0.8875 - val_loss: 0.3075 - val_accuracy: 0.8136
Epoch 225/1000
2/2 [==============================] - ETA: 0s - loss: 0.2499 - accuracy: 0.8500
Epoch 225: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2499 - accuracy: 0.8500 - val_loss: 0.3094 - val_accuracy: 0.8136
Epoch 226/1000
2/2 [==============================] - ETA: 0s - loss: 0.2541 - accuracy: 0.8672
Epoch 226: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 957ms/step - loss: 0.2541 - accuracy: 0.8672 - val_loss: 0.3105 - val_accuracy: 0.8136
Epoch 227/1000
2/2 [==============================] - ETA: 0s - loss: 0.2353 - accuracy: 0.8672
Epoch 227: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 903ms/step - loss: 0.2353 - accuracy: 0.8672 - val_loss: 0.3106 - val_accuracy: 0.8305
Epoch 228/1000
2/2 [==============================] - ETA: 0s - loss: 0.2782 - accuracy: 0.8375
Epoch 228: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 792ms/step - loss: 0.2782 - accuracy: 0.8375 - val_loss: 0.3112 - val_accuracy: 0.8305
Epoch 229/1000
2/2 [==============================] - ETA: 0s - loss: 0.2693 - accuracy: 0.8875
Epoch 229: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 795ms/step - loss: 0.2693 - accuracy: 0.8875 - val_loss: 0.3124 - val_accuracy: 0.8305
Epoch 230/1000
2/2 [==============================] - ETA: 0s - loss: 0.2889 - accuracy: 0.8281
Epoch 230: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 943ms/step - loss: 0.2889 - accuracy: 0.8281 - val_loss: 0.3135 - val_accuracy: 0.8305
Epoch 231/1000
2/2 [==============================] - ETA: 0s - loss: 0.2589 - accuracy: 0.8984
Epoch 231: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 907ms/step - loss: 0.2589 - accuracy: 0.8984 - val_loss: 0.3135 - val_accuracy: 0.8305
Epoch 232/1000
2/2 [==============================] - ETA: 0s - loss: 0.2456 - accuracy: 0.8984
Epoch 232: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2456 - accuracy: 0.8984 - val_loss: 0.3123 - val_accuracy: 0.8305
Epoch 233/1000
2/2 [==============================] - ETA: 0s - loss: 0.2860 - accuracy: 0.8281
Epoch 233: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2860 - accuracy: 0.8281 - val_loss: 0.3108 - val_accuracy: 0.8305
Epoch 234/1000
2/2 [==============================] - ETA: 0s - loss: 0.2758 - accuracy: 0.8438
Epoch 234: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 910ms/step - loss: 0.2758 - accuracy: 0.8438 - val_loss: 0.3082 - val_accuracy: 0.8305
Epoch 235/1000
2/2 [==============================] - ETA: 0s - loss: 0.2963 - accuracy: 0.8438
Epoch 235: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2963 - accuracy: 0.8438 - val_loss: 0.3071 - val_accuracy: 0.8136
Epoch 236/1000
2/2 [==============================] - ETA: 0s - loss: 0.2494 - accuracy: 0.8906
Epoch 236: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 946ms/step - loss: 0.2494 - accuracy: 0.8906 - val_loss: 0.3057 - val_accuracy: 0.8136
Epoch 237/1000
2/2 [==============================] - ETA: 0s - loss: 0.2573 - accuracy: 0.9062
Epoch 237: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 917ms/step - loss: 0.2573 - accuracy: 0.9062 - val_loss: 0.3048 - val_accuracy: 0.8136
Epoch 238/1000
2/2 [==============================] - ETA: 0s - loss: 0.2491 - accuracy: 0.8828
Epoch 238: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 921ms/step - loss: 0.2491 - accuracy: 0.8828 - val_loss: 0.3050 - val_accuracy: 0.8136
Epoch 239/1000
2/2 [==============================] - ETA: 0s - loss: 0.2366 - accuracy: 0.9000
Epoch 239: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2366 - accuracy: 0.9000 - val_loss: 0.3059 - val_accuracy: 0.8305
Epoch 240/1000
2/2 [==============================] - ETA: 0s - loss: 0.2333 - accuracy: 0.9062
Epoch 240: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 945ms/step - loss: 0.2333 - accuracy: 0.9062 - val_loss: 0.3063 - val_accuracy: 0.8475
Epoch 241/1000
2/2 [==============================] - ETA: 0s - loss: 0.2809 - accuracy: 0.8672
Epoch 241: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2809 - accuracy: 0.8672 - val_loss: 0.3059 - val_accuracy: 0.8305
Epoch 242/1000
2/2 [==============================] - ETA: 0s - loss: 0.2800 - accuracy: 0.8750
Epoch 242: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2800 - accuracy: 0.8750 - val_loss: 0.3063 - val_accuracy: 0.8475
Epoch 243/1000
2/2 [==============================] - ETA: 0s - loss: 0.2448 - accuracy: 0.9000
Epoch 243: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2448 - accuracy: 0.9000 - val_loss: 0.3057 - val_accuracy: 0.8305
Epoch 244/1000
2/2 [==============================] - ETA: 0s - loss: 0.2235 - accuracy: 0.9000
Epoch 244: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 794ms/step - loss: 0.2235 - accuracy: 0.9000 - val_loss: 0.3050 - val_accuracy: 0.8136
Epoch 245/1000
2/2 [==============================] - ETA: 0s - loss: 0.2548 - accuracy: 0.8625
Epoch 245: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2548 - accuracy: 0.8625 - val_loss: 0.3034 - val_accuracy: 0.8136
Epoch 246/1000
2/2 [==============================] - ETA: 0s - loss: 0.2482 - accuracy: 0.8672
Epoch 246: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 946ms/step - loss: 0.2482 - accuracy: 0.8672 - val_loss: 0.3021 - val_accuracy: 0.8136
Epoch 247/1000
2/2 [==============================] - ETA: 0s - loss: 0.2149 - accuracy: 0.9062
Epoch 247: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2149 - accuracy: 0.9062 - val_loss: 0.3014 - val_accuracy: 0.8136
Epoch 248/1000
2/2 [==============================] - ETA: 0s - loss: 0.2617 - accuracy: 0.8594
Epoch 248: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2617 - accuracy: 0.8594 - val_loss: 0.3010 - val_accuracy: 0.8136
Epoch 249/1000
2/2 [==============================] - ETA: 0s - loss: 0.2135 - accuracy: 0.9219
Epoch 249: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2135 - accuracy: 0.9219 - val_loss: 0.3009 - val_accuracy: 0.8136
Epoch 250/1000
2/2 [==============================] - ETA: 0s - loss: 0.2178 - accuracy: 0.9297
Epoch 250: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2178 - accuracy: 0.9297 - val_loss: 0.3010 - val_accuracy: 0.8136
Epoch 251/1000
2/2 [==============================] - ETA: 0s - loss: 0.2670 - accuracy: 0.8750
Epoch 251: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2670 - accuracy: 0.8750 - val_loss: 0.3018 - val_accuracy: 0.8136
Epoch 252/1000
2/2 [==============================] - ETA: 0s - loss: 0.2248 - accuracy: 0.8750
Epoch 252: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 818ms/step - loss: 0.2248 - accuracy: 0.8750 - val_loss: 0.3011 - val_accuracy: 0.8136
Epoch 253/1000
2/2 [==============================] - ETA: 0s - loss: 0.2740 - accuracy: 0.8828
Epoch 253: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2740 - accuracy: 0.8828 - val_loss: 0.2994 - val_accuracy: 0.8136
Epoch 254/1000
2/2 [==============================] - ETA: 0s - loss: 0.2816 - accuracy: 0.8250
Epoch 254: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 803ms/step - loss: 0.2816 - accuracy: 0.8250 - val_loss: 0.2979 - val_accuracy: 0.8136
Epoch 255/1000
2/2 [==============================] - ETA: 0s - loss: 0.2820 - accuracy: 0.8359
Epoch 255: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 947ms/step - loss: 0.2820 - accuracy: 0.8359 - val_loss: 0.2963 - val_accuracy: 0.8136
Epoch 256/1000
2/2 [==============================] - ETA: 0s - loss: 0.2573 - accuracy: 0.8594
Epoch 256: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2573 - accuracy: 0.8594 - val_loss: 0.2953 - val_accuracy: 0.8136
Epoch 257/1000
2/2 [==============================] - ETA: 0s - loss: 0.2565 - accuracy: 0.8594
Epoch 257: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2565 - accuracy: 0.8594 - val_loss: 0.2960 - val_accuracy: 0.8136
Epoch 258/1000
2/2 [==============================] - ETA: 0s - loss: 0.2307 - accuracy: 0.8984
Epoch 258: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2307 - accuracy: 0.8984 - val_loss: 0.2969 - val_accuracy: 0.8136
Epoch 259/1000
2/2 [==============================] - ETA: 0s - loss: 0.2131 - accuracy: 0.8906
Epoch 259: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2131 - accuracy: 0.8906 - val_loss: 0.2983 - val_accuracy: 0.8136
Epoch 260/1000
2/2 [==============================] - ETA: 0s - loss: 0.2280 - accuracy: 0.8906
Epoch 260: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 902ms/step - loss: 0.2280 - accuracy: 0.8906 - val_loss: 0.2995 - val_accuracy: 0.8136
Epoch 261/1000
2/2 [==============================] - ETA: 0s - loss: 0.2603 - accuracy: 0.8828
Epoch 261: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2603 - accuracy: 0.8828 - val_loss: 0.3003 - val_accuracy: 0.8136
Epoch 262/1000
2/2 [==============================] - ETA: 0s - loss: 0.2892 - accuracy: 0.8375
Epoch 262: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2892 - accuracy: 0.8375 - val_loss: 0.3015 - val_accuracy: 0.8136
Epoch 263/1000
2/2 [==============================] - ETA: 0s - loss: 0.2298 - accuracy: 0.8875
Epoch 263: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2298 - accuracy: 0.8875 - val_loss: 0.3009 - val_accuracy: 0.8136
Epoch 264/1000
2/2 [==============================] - ETA: 0s - loss: 0.2543 - accuracy: 0.9062
Epoch 264: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 958ms/step - loss: 0.2543 - accuracy: 0.9062 - val_loss: 0.3001 - val_accuracy: 0.8136
Epoch 265/1000
2/2 [==============================] - ETA: 0s - loss: 0.2106 - accuracy: 0.9375
Epoch 265: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 814ms/step - loss: 0.2106 - accuracy: 0.9375 - val_loss: 0.2987 - val_accuracy: 0.8136
Epoch 266/1000
2/2 [==============================] - ETA: 0s - loss: 0.2526 - accuracy: 0.8828
Epoch 266: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2526 - accuracy: 0.8828 - val_loss: 0.2968 - val_accuracy: 0.8136
Epoch 267/1000
2/2 [==============================] - ETA: 0s - loss: 0.2803 - accuracy: 0.8500
Epoch 267: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 853ms/step - loss: 0.2803 - accuracy: 0.8500 - val_loss: 0.2950 - val_accuracy: 0.8136
Epoch 268/1000
2/2 [==============================] - ETA: 0s - loss: 0.2660 - accuracy: 0.8750
Epoch 268: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 806ms/step - loss: 0.2660 - accuracy: 0.8750 - val_loss: 0.2931 - val_accuracy: 0.8136
Epoch 269/1000
2/2 [==============================] - ETA: 0s - loss: 0.2276 - accuracy: 0.8828
Epoch 269: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2276 - accuracy: 0.8828 - val_loss: 0.2913 - val_accuracy: 0.8136
Epoch 270/1000
2/2 [==============================] - ETA: 0s - loss: 0.2157 - accuracy: 0.9125
Epoch 270: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 860ms/step - loss: 0.2157 - accuracy: 0.9125 - val_loss: 0.2903 - val_accuracy: 0.8136
Epoch 271/1000
2/2 [==============================] - ETA: 0s - loss: 0.1974 - accuracy: 0.9375
Epoch 271: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 898ms/step - loss: 0.1974 - accuracy: 0.9375 - val_loss: 0.2898 - val_accuracy: 0.8136
Epoch 272/1000
2/2 [==============================] - ETA: 0s - loss: 0.2401 - accuracy: 0.8750
Epoch 272: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 943ms/step - loss: 0.2401 - accuracy: 0.8750 - val_loss: 0.2889 - val_accuracy: 0.8136
Epoch 273/1000
2/2 [==============================] - ETA: 0s - loss: 0.2718 - accuracy: 0.8375
Epoch 273: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2718 - accuracy: 0.8375 - val_loss: 0.2886 - val_accuracy: 0.8136
Epoch 274/1000
2/2 [==============================] - ETA: 0s - loss: 0.2322 - accuracy: 0.8984
Epoch 274: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 930ms/step - loss: 0.2322 - accuracy: 0.8984 - val_loss: 0.2888 - val_accuracy: 0.8136
Epoch 275/1000
2/2 [==============================] - ETA: 0s - loss: 0.2986 - accuracy: 0.8438
Epoch 275: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 957ms/step - loss: 0.2986 - accuracy: 0.8438 - val_loss: 0.2887 - val_accuracy: 0.8136
Epoch 276/1000
2/2 [==============================] - ETA: 0s - loss: 0.2662 - accuracy: 0.8438
Epoch 276: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2662 - accuracy: 0.8438 - val_loss: 0.2889 - val_accuracy: 0.8136
Epoch 277/1000
2/2 [==============================] - ETA: 0s - loss: 0.2386 - accuracy: 0.8984
Epoch 277: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2386 - accuracy: 0.8984 - val_loss: 0.2899 - val_accuracy: 0.8136
Epoch 278/1000
2/2 [==============================] - ETA: 0s - loss: 0.2327 - accuracy: 0.9250
Epoch 278: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2327 - accuracy: 0.9250 - val_loss: 0.2929 - val_accuracy: 0.8136
Epoch 279/1000
2/2 [==============================] - ETA: 0s - loss: 0.2378 - accuracy: 0.8984
Epoch 279: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2378 - accuracy: 0.8984 - val_loss: 0.2975 - val_accuracy: 0.8136
Epoch 280/1000
2/2 [==============================] - ETA: 0s - loss: 0.2511 - accuracy: 0.8594
Epoch 280: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2511 - accuracy: 0.8594 - val_loss: 0.3020 - val_accuracy: 0.8136
Epoch 281/1000
2/2 [==============================] - ETA: 0s - loss: 0.2288 - accuracy: 0.8984
Epoch 281: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 916ms/step - loss: 0.2288 - accuracy: 0.8984 - val_loss: 0.3068 - val_accuracy: 0.8136
Epoch 282/1000
2/2 [==============================] - ETA: 0s - loss: 0.2698 - accuracy: 0.8359
Epoch 282: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2698 - accuracy: 0.8359 - val_loss: 0.3105 - val_accuracy: 0.8136
Epoch 283/1000
2/2 [==============================] - ETA: 0s - loss: 0.2154 - accuracy: 0.9141
Epoch 283: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2154 - accuracy: 0.9141 - val_loss: 0.3148 - val_accuracy: 0.7966
Epoch 284/1000
2/2 [==============================] - ETA: 0s - loss: 0.2556 - accuracy: 0.8500
Epoch 284: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 842ms/step - loss: 0.2556 - accuracy: 0.8500 - val_loss: 0.3190 - val_accuracy: 0.7627
Epoch 285/1000
2/2 [==============================] - ETA: 0s - loss: 0.2494 - accuracy: 0.8625
Epoch 285: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 2s/step - loss: 0.2494 - accuracy: 0.8625 - val_loss: 0.3235 - val_accuracy: 0.7458
Epoch 286/1000
2/2 [==============================] - ETA: 0s - loss: 0.2026 - accuracy: 0.8875
Epoch 286: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2026 - accuracy: 0.8875 - val_loss: 0.3262 - val_accuracy: 0.7627
Epoch 287/1000
2/2 [==============================] - ETA: 0s - loss: 0.2219 - accuracy: 0.8750
Epoch 287: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2219 - accuracy: 0.8750 - val_loss: 0.3293 - val_accuracy: 0.7627
Epoch 288/1000
2/2 [==============================] - ETA: 0s - loss: 0.2030 - accuracy: 0.9141
Epoch 288: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 909ms/step - loss: 0.2030 - accuracy: 0.9141 - val_loss: 0.3301 - val_accuracy: 0.7627
Epoch 289/1000
2/2 [==============================] - ETA: 0s - loss: 0.2287 - accuracy: 0.8906
Epoch 289: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 914ms/step - loss: 0.2287 - accuracy: 0.8906 - val_loss: 0.3300 - val_accuracy: 0.7627
Epoch 290/1000
2/2 [==============================] - ETA: 0s - loss: 0.2328 - accuracy: 0.8750
Epoch 290: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 950ms/step - loss: 0.2328 - accuracy: 0.8750 - val_loss: 0.3270 - val_accuracy: 0.7797
Epoch 291/1000
2/2 [==============================] - ETA: 0s - loss: 0.2071 - accuracy: 0.9141
Epoch 291: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2071 - accuracy: 0.9141 - val_loss: 0.3240 - val_accuracy: 0.7797
Epoch 292/1000
2/2 [==============================] - ETA: 0s - loss: 0.2068 - accuracy: 0.9000
Epoch 292: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2068 - accuracy: 0.9000 - val_loss: 0.3218 - val_accuracy: 0.7797
Epoch 293/1000
2/2 [==============================] - ETA: 0s - loss: 0.1890 - accuracy: 0.9250
Epoch 293: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 804ms/step - loss: 0.1890 - accuracy: 0.9250 - val_loss: 0.3199 - val_accuracy: 0.7797
Epoch 294/1000
2/2 [==============================] - ETA: 0s - loss: 0.2426 - accuracy: 0.8875
Epoch 294: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 790ms/step - loss: 0.2426 - accuracy: 0.8875 - val_loss: 0.3161 - val_accuracy: 0.8136
Epoch 295/1000
2/2 [==============================] - ETA: 0s - loss: 0.2291 - accuracy: 0.9125
Epoch 295: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2291 - accuracy: 0.9125 - val_loss: 0.3102 - val_accuracy: 0.8475
Epoch 296/1000
2/2 [==============================] - ETA: 0s - loss: 0.2617 - accuracy: 0.8500
Epoch 296: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 824ms/step - loss: 0.2617 - accuracy: 0.8500 - val_loss: 0.3041 - val_accuracy: 0.8305
Epoch 297/1000
2/2 [==============================] - ETA: 0s - loss: 0.1950 - accuracy: 0.9500
Epoch 297: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 818ms/step - loss: 0.1950 - accuracy: 0.9500 - val_loss: 0.2988 - val_accuracy: 0.8305
Epoch 298/1000
2/2 [==============================] - ETA: 0s - loss: 0.2231 - accuracy: 0.9141
Epoch 298: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2231 - accuracy: 0.9141 - val_loss: 0.2959 - val_accuracy: 0.8305
Epoch 299/1000
2/2 [==============================] - ETA: 0s - loss: 0.1917 - accuracy: 0.9000
Epoch 299: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1917 - accuracy: 0.9000 - val_loss: 0.2945 - val_accuracy: 0.8305
Epoch 300/1000
2/2 [==============================] - ETA: 0s - loss: 0.2121 - accuracy: 0.9000
Epoch 300: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 794ms/step - loss: 0.2121 - accuracy: 0.9000 - val_loss: 0.2938 - val_accuracy: 0.8305
Epoch 301/1000
2/2 [==============================] - ETA: 0s - loss: 0.2052 - accuracy: 0.8828
Epoch 301: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2052 - accuracy: 0.8828 - val_loss: 0.2929 - val_accuracy: 0.8305
Epoch 302/1000
2/2 [==============================] - ETA: 0s - loss: 0.1914 - accuracy: 0.9375
Epoch 302: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 795ms/step - loss: 0.1914 - accuracy: 0.9375 - val_loss: 0.2915 - val_accuracy: 0.8305
Epoch 303/1000
2/2 [==============================] - ETA: 0s - loss: 0.2616 - accuracy: 0.8250
Epoch 303: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 800ms/step - loss: 0.2616 - accuracy: 0.8250 - val_loss: 0.2906 - val_accuracy: 0.8305
Epoch 304/1000
2/2 [==============================] - ETA: 0s - loss: 0.2484 - accuracy: 0.8750
Epoch 304: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2484 - accuracy: 0.8750 - val_loss: 0.2926 - val_accuracy: 0.8305
Epoch 305/1000
2/2 [==============================] - ETA: 0s - loss: 0.2136 - accuracy: 0.9062
Epoch 305: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2136 - accuracy: 0.9062 - val_loss: 0.2943 - val_accuracy: 0.8305
Epoch 306/1000
2/2 [==============================] - ETA: 0s - loss: 0.2577 - accuracy: 0.8750
Epoch 306: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 792ms/step - loss: 0.2577 - accuracy: 0.8750 - val_loss: 0.2947 - val_accuracy: 0.8305
Epoch 307/1000
2/2 [==============================] - ETA: 0s - loss: 0.2036 - accuracy: 0.9297
Epoch 307: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2036 - accuracy: 0.9297 - val_loss: 0.2952 - val_accuracy: 0.8305
Epoch 308/1000
2/2 [==============================] - ETA: 0s - loss: 0.2358 - accuracy: 0.8594
Epoch 308: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 906ms/step - loss: 0.2358 - accuracy: 0.8594 - val_loss: 0.2963 - val_accuracy: 0.8305
Epoch 309/1000
2/2 [==============================] - ETA: 0s - loss: 0.2349 - accuracy: 0.9062
Epoch 309: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2349 - accuracy: 0.9062 - val_loss: 0.2975 - val_accuracy: 0.8305
Epoch 310/1000
2/2 [==============================] - ETA: 0s - loss: 0.2118 - accuracy: 0.8625
Epoch 310: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 808ms/step - loss: 0.2118 - accuracy: 0.8625 - val_loss: 0.2989 - val_accuracy: 0.8305
Epoch 311/1000
2/2 [==============================] - ETA: 0s - loss: 0.1725 - accuracy: 0.9000
Epoch 311: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1725 - accuracy: 0.9000 - val_loss: 0.2993 - val_accuracy: 0.8305
Epoch 312/1000
2/2 [==============================] - ETA: 0s - loss: 0.2201 - accuracy: 0.9125
Epoch 312: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2201 - accuracy: 0.9125 - val_loss: 0.3002 - val_accuracy: 0.8305
Epoch 313/1000
2/2 [==============================] - ETA: 0s - loss: 0.2136 - accuracy: 0.8750
Epoch 313: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2136 - accuracy: 0.8750 - val_loss: 0.3005 - val_accuracy: 0.8305
Epoch 314/1000
2/2 [==============================] - ETA: 0s - loss: 0.2057 - accuracy: 0.8906
Epoch 314: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 934ms/step - loss: 0.2057 - accuracy: 0.8906 - val_loss: 0.3016 - val_accuracy: 0.8305
Epoch 315/1000
2/2 [==============================] - ETA: 0s - loss: 0.2134 - accuracy: 0.8984
Epoch 315: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 968ms/step - loss: 0.2134 - accuracy: 0.8984 - val_loss: 0.3029 - val_accuracy: 0.8305
Epoch 316/1000
2/2 [==============================] - ETA: 0s - loss: 0.2028 - accuracy: 0.9375
Epoch 316: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2028 - accuracy: 0.9375 - val_loss: 0.3031 - val_accuracy: 0.8305
Epoch 317/1000
2/2 [==============================] - ETA: 0s - loss: 0.2105 - accuracy: 0.8750
Epoch 317: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2105 - accuracy: 0.8750 - val_loss: 0.3014 - val_accuracy: 0.8305
Epoch 318/1000
2/2 [==============================] - ETA: 0s - loss: 0.2106 - accuracy: 0.8984
Epoch 318: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 918ms/step - loss: 0.2106 - accuracy: 0.8984 - val_loss: 0.3000 - val_accuracy: 0.8305
Epoch 319/1000
2/2 [==============================] - ETA: 0s - loss: 0.1630 - accuracy: 0.9750
Epoch 319: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 796ms/step - loss: 0.1630 - accuracy: 0.9750 - val_loss: 0.3004 - val_accuracy: 0.8305
Epoch 320/1000
2/2 [==============================] - ETA: 0s - loss: 0.1539 - accuracy: 0.9500
Epoch 320: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 810ms/step - loss: 0.1539 - accuracy: 0.9500 - val_loss: 0.3006 - val_accuracy: 0.8305
Epoch 321/1000
2/2 [==============================] - ETA: 0s - loss: 0.2218 - accuracy: 0.8594
Epoch 321: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2218 - accuracy: 0.8594 - val_loss: 0.3013 - val_accuracy: 0.8305
Epoch 322/1000
2/2 [==============================] - ETA: 0s - loss: 0.2165 - accuracy: 0.9062
Epoch 322: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2165 - accuracy: 0.9062 - val_loss: 0.3022 - val_accuracy: 0.8305
Epoch 323/1000
2/2 [==============================] - ETA: 0s - loss: 0.1919 - accuracy: 0.9000
Epoch 323: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1919 - accuracy: 0.9000 - val_loss: 0.3030 - val_accuracy: 0.8305
Epoch 324/1000
2/2 [==============================] - ETA: 0s - loss: 0.1958 - accuracy: 0.9000
Epoch 324: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 850ms/step - loss: 0.1958 - accuracy: 0.9000 - val_loss: 0.3028 - val_accuracy: 0.8305
Epoch 325/1000
2/2 [==============================] - ETA: 0s - loss: 0.1868 - accuracy: 0.9000
Epoch 325: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 814ms/step - loss: 0.1868 - accuracy: 0.9000 - val_loss: 0.3007 - val_accuracy: 0.8305
Epoch 326/1000
2/2 [==============================] - ETA: 0s - loss: 0.2316 - accuracy: 0.9062
Epoch 326: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 941ms/step - loss: 0.2316 - accuracy: 0.9062 - val_loss: 0.2972 - val_accuracy: 0.8305
Epoch 327/1000
2/2 [==============================] - ETA: 0s - loss: 0.2059 - accuracy: 0.8875
Epoch 327: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2059 - accuracy: 0.8875 - val_loss: 0.2908 - val_accuracy: 0.8305
Epoch 328/1000
2/2 [==============================] - ETA: 0s - loss: 0.1977 - accuracy: 0.8906
Epoch 328: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 969ms/step - loss: 0.1977 - accuracy: 0.8906 - val_loss: 0.2869 - val_accuracy: 0.8305
Epoch 329/1000
2/2 [==============================] - ETA: 0s - loss: 0.2260 - accuracy: 0.8984
Epoch 329: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 992ms/step - loss: 0.2260 - accuracy: 0.8984 - val_loss: 0.2843 - val_accuracy: 0.8305
Epoch 330/1000
2/2 [==============================] - ETA: 0s - loss: 0.2437 - accuracy: 0.8625
Epoch 330: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2437 - accuracy: 0.8625 - val_loss: 0.2842 - val_accuracy: 0.8305
Epoch 331/1000
2/2 [==============================] - ETA: 0s - loss: 0.2069 - accuracy: 0.8984
Epoch 331: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 935ms/step - loss: 0.2069 - accuracy: 0.8984 - val_loss: 0.2851 - val_accuracy: 0.8305
Epoch 332/1000
2/2 [==============================] - ETA: 0s - loss: 0.1874 - accuracy: 0.9000
Epoch 332: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 869ms/step - loss: 0.1874 - accuracy: 0.9000 - val_loss: 0.2855 - val_accuracy: 0.8305
Epoch 333/1000
2/2 [==============================] - ETA: 0s - loss: 0.1848 - accuracy: 0.9125
Epoch 333: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 787ms/step - loss: 0.1848 - accuracy: 0.9125 - val_loss: 0.2884 - val_accuracy: 0.8305
Epoch 334/1000
2/2 [==============================] - ETA: 0s - loss: 0.2140 - accuracy: 0.8984
Epoch 334: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2140 - accuracy: 0.8984 - val_loss: 0.2922 - val_accuracy: 0.8305
Epoch 335/1000
2/2 [==============================] - ETA: 0s - loss: 0.2155 - accuracy: 0.8594
Epoch 335: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 998ms/step - loss: 0.2155 - accuracy: 0.8594 - val_loss: 0.2948 - val_accuracy: 0.8305
Epoch 336/1000
2/2 [==============================] - ETA: 0s - loss: 0.2458 - accuracy: 0.8625
Epoch 336: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 826ms/step - loss: 0.2458 - accuracy: 0.8625 - val_loss: 0.2973 - val_accuracy: 0.8305
Epoch 337/1000
2/2 [==============================] - ETA: 0s - loss: 0.1843 - accuracy: 0.9125
Epoch 337: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 812ms/step - loss: 0.1843 - accuracy: 0.9125 - val_loss: 0.3001 - val_accuracy: 0.8136
Epoch 338/1000
2/2 [==============================] - ETA: 0s - loss: 0.2171 - accuracy: 0.9000
Epoch 338: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 847ms/step - loss: 0.2171 - accuracy: 0.9000 - val_loss: 0.3006 - val_accuracy: 0.8136
Epoch 339/1000
2/2 [==============================] - ETA: 0s - loss: 0.2334 - accuracy: 0.8500
Epoch 339: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2334 - accuracy: 0.8500 - val_loss: 0.3007 - val_accuracy: 0.8136
Epoch 340/1000
2/2 [==============================] - ETA: 0s - loss: 0.1649 - accuracy: 0.9531
Epoch 340: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 921ms/step - loss: 0.1649 - accuracy: 0.9531 - val_loss: 0.3008 - val_accuracy: 0.8136
Epoch 341/1000
2/2 [==============================] - ETA: 0s - loss: 0.1953 - accuracy: 0.8984
Epoch 341: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1953 - accuracy: 0.8984 - val_loss: 0.3000 - val_accuracy: 0.8136
Epoch 342/1000
2/2 [==============================] - ETA: 0s - loss: 0.1953 - accuracy: 0.8875
Epoch 342: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 820ms/step - loss: 0.1953 - accuracy: 0.8875 - val_loss: 0.2995 - val_accuracy: 0.8136
Epoch 343/1000
2/2 [==============================] - ETA: 0s - loss: 0.2022 - accuracy: 0.8906
Epoch 343: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 931ms/step - loss: 0.2022 - accuracy: 0.8906 - val_loss: 0.2981 - val_accuracy: 0.8136
Epoch 344/1000
2/2 [==============================] - ETA: 0s - loss: 0.2112 - accuracy: 0.8875
Epoch 344: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2112 - accuracy: 0.8875 - val_loss: 0.2967 - val_accuracy: 0.8136
Epoch 345/1000
2/2 [==============================] - ETA: 0s - loss: 0.2026 - accuracy: 0.9125
Epoch 345: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2026 - accuracy: 0.9125 - val_loss: 0.2950 - val_accuracy: 0.8136
Epoch 346/1000
2/2 [==============================] - ETA: 0s - loss: 0.2523 - accuracy: 0.8500
Epoch 346: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2523 - accuracy: 0.8500 - val_loss: 0.2945 - val_accuracy: 0.8136
Epoch 347/1000
2/2 [==============================] - ETA: 0s - loss: 0.1992 - accuracy: 0.8906
Epoch 347: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1992 - accuracy: 0.8906 - val_loss: 0.2937 - val_accuracy: 0.8136
Epoch 348/1000
2/2 [==============================] - ETA: 0s - loss: 0.2214 - accuracy: 0.8906
Epoch 348: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2214 - accuracy: 0.8906 - val_loss: 0.2934 - val_accuracy: 0.8136
Epoch 349/1000
2/2 [==============================] - ETA: 0s - loss: 0.1557 - accuracy: 0.9375
Epoch 349: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1557 - accuracy: 0.9375 - val_loss: 0.2937 - val_accuracy: 0.8136
Epoch 350/1000
2/2 [==============================] - ETA: 0s - loss: 0.2254 - accuracy: 0.8828
Epoch 350: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2254 - accuracy: 0.8828 - val_loss: 0.2925 - val_accuracy: 0.8136
Epoch 351/1000
2/2 [==============================] - ETA: 0s - loss: 0.2194 - accuracy: 0.8906
Epoch 351: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 891ms/step - loss: 0.2194 - accuracy: 0.8906 - val_loss: 0.2909 - val_accuracy: 0.8136
Epoch 352/1000
2/2 [==============================] - ETA: 0s - loss: 0.2548 - accuracy: 0.8750
Epoch 352: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 963ms/step - loss: 0.2548 - accuracy: 0.8750 - val_loss: 0.2898 - val_accuracy: 0.8136
Epoch 353/1000
2/2 [==============================] - ETA: 0s - loss: 0.2142 - accuracy: 0.9062
Epoch 353: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2142 - accuracy: 0.9062 - val_loss: 0.2904 - val_accuracy: 0.8136
Epoch 354/1000
2/2 [==============================] - ETA: 0s - loss: 0.2285 - accuracy: 0.8984
Epoch 354: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2285 - accuracy: 0.8984 - val_loss: 0.2903 - val_accuracy: 0.8136
Epoch 355/1000
2/2 [==============================] - ETA: 0s - loss: 0.1971 - accuracy: 0.9250
Epoch 355: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 813ms/step - loss: 0.1971 - accuracy: 0.9250 - val_loss: 0.2898 - val_accuracy: 0.8136
Epoch 356/1000
2/2 [==============================] - ETA: 0s - loss: 0.1707 - accuracy: 0.9125
Epoch 356: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 804ms/step - loss: 0.1707 - accuracy: 0.9125 - val_loss: 0.2897 - val_accuracy: 0.7966
Epoch 357/1000
2/2 [==============================] - ETA: 0s - loss: 0.1891 - accuracy: 0.9297
Epoch 357: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1891 - accuracy: 0.9297 - val_loss: 0.2902 - val_accuracy: 0.7966
Epoch 358/1000
2/2 [==============================] - ETA: 0s - loss: 0.2287 - accuracy: 0.8906
Epoch 358: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 916ms/step - loss: 0.2287 - accuracy: 0.8906 - val_loss: 0.2905 - val_accuracy: 0.7966
Epoch 359/1000
2/2 [==============================] - ETA: 0s - loss: 0.1855 - accuracy: 0.9000
Epoch 359: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 808ms/step - loss: 0.1855 - accuracy: 0.9000 - val_loss: 0.2893 - val_accuracy: 0.7966
Epoch 360/1000
2/2 [==============================] - ETA: 0s - loss: 0.1888 - accuracy: 0.9000
Epoch 360: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1888 - accuracy: 0.9000 - val_loss: 0.2888 - val_accuracy: 0.7966
Epoch 361/1000
2/2 [==============================] - ETA: 0s - loss: 0.1960 - accuracy: 0.8906
Epoch 361: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 937ms/step - loss: 0.1960 - accuracy: 0.8906 - val_loss: 0.2888 - val_accuracy: 0.8136
Epoch 362/1000
2/2 [==============================] - ETA: 0s - loss: 0.1805 - accuracy: 0.9219
Epoch 362: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1805 - accuracy: 0.9219 - val_loss: 0.2886 - val_accuracy: 0.8136
Epoch 363/1000
2/2 [==============================] - ETA: 0s - loss: 0.2204 - accuracy: 0.8438
Epoch 363: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2204 - accuracy: 0.8438 - val_loss: 0.2874 - val_accuracy: 0.8136
Epoch 364/1000
2/2 [==============================] - ETA: 0s - loss: 0.2377 - accuracy: 0.8750
Epoch 364: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.2377 - accuracy: 0.8750 - val_loss: 0.2852 - val_accuracy: 0.8305
Epoch 365/1000
2/2 [==============================] - ETA: 0s - loss: 0.2509 - accuracy: 0.8359
Epoch 365: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2509 - accuracy: 0.8359 - val_loss: 0.2844 - val_accuracy: 0.8305
Epoch 366/1000
2/2 [==============================] - ETA: 0s - loss: 0.2157 - accuracy: 0.9062
Epoch 366: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 937ms/step - loss: 0.2157 - accuracy: 0.9062 - val_loss: 0.2826 - val_accuracy: 0.8305
Epoch 367/1000
2/2 [==============================] - ETA: 0s - loss: 0.2052 - accuracy: 0.9062
Epoch 367: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2052 - accuracy: 0.9062 - val_loss: 0.2812 - val_accuracy: 0.8305
Epoch 368/1000
2/2 [==============================] - ETA: 0s - loss: 0.1466 - accuracy: 0.9766
Epoch 368: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 914ms/step - loss: 0.1466 - accuracy: 0.9766 - val_loss: 0.2792 - val_accuracy: 0.8475
Epoch 369/1000
2/2 [==============================] - ETA: 0s - loss: 0.2298 - accuracy: 0.8672
Epoch 369: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2298 - accuracy: 0.8672 - val_loss: 0.2770 - val_accuracy: 0.8305
Epoch 370/1000
2/2 [==============================] - ETA: 0s - loss: 0.2274 - accuracy: 0.8984
Epoch 370: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.2274 - accuracy: 0.8984 - val_loss: 0.2750 - val_accuracy: 0.8305
Epoch 371/1000
2/2 [==============================] - ETA: 0s - loss: 0.2067 - accuracy: 0.8875
Epoch 371: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 811ms/step - loss: 0.2067 - accuracy: 0.8875 - val_loss: 0.2723 - val_accuracy: 0.8305
Epoch 372/1000
2/2 [==============================] - ETA: 0s - loss: 0.1376 - accuracy: 0.9250
Epoch 372: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 806ms/step - loss: 0.1376 - accuracy: 0.9250 - val_loss: 0.2710 - val_accuracy: 0.8305
Epoch 373/1000
2/2 [==============================] - ETA: 0s - loss: 0.1334 - accuracy: 0.9766
Epoch 373: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1334 - accuracy: 0.9766 - val_loss: 0.2704 - val_accuracy: 0.8305
Epoch 374/1000
2/2 [==============================] - ETA: 0s - loss: 0.1969 - accuracy: 0.9062
Epoch 374: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1969 - accuracy: 0.9062 - val_loss: 0.2690 - val_accuracy: 0.8305
Epoch 375/1000
2/2 [==============================] - ETA: 0s - loss: 0.1532 - accuracy: 0.9250
Epoch 375: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1532 - accuracy: 0.9250 - val_loss: 0.2681 - val_accuracy: 0.8305
Epoch 376/1000
2/2 [==============================] - ETA: 0s - loss: 0.1761 - accuracy: 0.9375
Epoch 376: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1761 - accuracy: 0.9375 - val_loss: 0.2677 - val_accuracy: 0.8305
Epoch 377/1000
2/2 [==============================] - ETA: 0s - loss: 0.1927 - accuracy: 0.9219
Epoch 377: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 916ms/step - loss: 0.1927 - accuracy: 0.9219 - val_loss: 0.2674 - val_accuracy: 0.8305
Epoch 378/1000
2/2 [==============================] - ETA: 0s - loss: 0.1983 - accuracy: 0.9297
Epoch 378: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1983 - accuracy: 0.9297 - val_loss: 0.2671 - val_accuracy: 0.8305
Epoch 379/1000
2/2 [==============================] - ETA: 0s - loss: 0.1826 - accuracy: 0.9375
Epoch 379: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 806ms/step - loss: 0.1826 - accuracy: 0.9375 - val_loss: 0.2670 - val_accuracy: 0.8305
Epoch 380/1000
2/2 [==============================] - ETA: 0s - loss: 0.1814 - accuracy: 0.8875
Epoch 380: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 803ms/step - loss: 0.1814 - accuracy: 0.8875 - val_loss: 0.2679 - val_accuracy: 0.8305
Epoch 381/1000
2/2 [==============================] - ETA: 0s - loss: 0.1725 - accuracy: 0.9125
Epoch 381: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 797ms/step - loss: 0.1725 - accuracy: 0.9125 - val_loss: 0.2694 - val_accuracy: 0.8305
Epoch 382/1000
2/2 [==============================] - ETA: 0s - loss: 0.1709 - accuracy: 0.9219
Epoch 382: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 948ms/step - loss: 0.1709 - accuracy: 0.9219 - val_loss: 0.2718 - val_accuracy: 0.8305
Epoch 383/1000
2/2 [==============================] - ETA: 0s - loss: 0.1744 - accuracy: 0.9125
Epoch 383: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 988ms/step - loss: 0.1744 - accuracy: 0.9125 - val_loss: 0.2752 - val_accuracy: 0.8305
Epoch 384/1000
2/2 [==============================] - ETA: 0s - loss: 0.1834 - accuracy: 0.9250
Epoch 384: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 815ms/step - loss: 0.1834 - accuracy: 0.9250 - val_loss: 0.2793 - val_accuracy: 0.8136
Epochs 385–621 (2 steps per epoch at roughly 0.8–2 s/step; a checkpoint was saved to `training_1/cp.ckpt` after every epoch):

| Epoch | loss | accuracy | val_loss | val_accuracy |
|------:|-------:|---------:|---------:|-------------:|
| 385 | 0.1865 | 0.9297 | 0.2834 | 0.8136 |
| 386 | 0.2197 | 0.8750 | 0.2869 | 0.8305 |
| 387 | 0.1715 | 0.9141 | 0.2888 | 0.8305 |
| 388 | 0.1848 | 0.8750 | 0.2891 | 0.8305 |
| 389 | 0.2054 | 0.9219 | 0.2882 | 0.8305 |
| 390 | 0.1498 | 0.9500 | 0.2871 | 0.8305 |
| 391 | 0.1969 | 0.9125 | 0.2851 | 0.8305 |
| 392 | 0.1831 | 0.9125 | 0.2831 | 0.8305 |
| 393 | 0.2146 | 0.8625 | 0.2820 | 0.8305 |
| 394 | 0.1512 | 0.9375 | 0.2816 | 0.8305 |
| 395 | 0.1887 | 0.8984 | 0.2810 | 0.8305 |
| 396 | 0.1964 | 0.9250 | 0.2817 | 0.8305 |
| 397 | 0.1661 | 0.9219 | 0.2819 | 0.8136 |
| 398 | 0.1866 | 0.9219 | 0.2835 | 0.8136 |
| 399 | 0.1613 | 0.9453 | 0.2854 | 0.8136 |
| 400 | 0.1936 | 0.9000 | 0.2866 | 0.8136 |
| 401 | 0.1871 | 0.9219 | 0.2878 | 0.7966 |
| 402 | 0.1557 | 0.9375 | 0.2889 | 0.7966 |
| 403 | 0.1863 | 0.9125 | 0.2906 | 0.8136 |
| 404 | 0.1650 | 0.9297 | 0.2921 | 0.8136 |
| 405 | 0.1796 | 0.9141 | 0.2936 | 0.8136 |
| 406 | 0.1615 | 0.9531 | 0.2949 | 0.8136 |
| 407 | 0.1877 | 0.9141 | 0.2954 | 0.8136 |
| 408 | 0.2060 | 0.8875 | 0.2953 | 0.8136 |
| 409 | 0.1334 | 0.9688 | 0.2956 | 0.8136 |
| 410 | 0.1217 | 0.9500 | 0.2970 | 0.8136 |
| 411 | 0.1435 | 0.9609 | 0.2978 | 0.8136 |
| 412 | 0.2369 | 0.8875 | 0.2975 | 0.8136 |
| 413 | 0.1769 | 0.9062 | 0.2976 | 0.8136 |
| 414 | 0.1529 | 0.9297 | 0.2980 | 0.8136 |
| 415 | 0.1929 | 0.9141 | 0.2981 | 0.8136 |
| 416 | 0.1664 | 0.9375 | 0.2983 | 0.8136 |
| 417 | 0.1497 | 0.9500 | 0.2982 | 0.8136 |
| 418 | 0.1411 | 0.9500 | 0.2985 | 0.8136 |
| 419 | 0.2223 | 0.8750 | 0.2979 | 0.8136 |
| 420 | 0.2264 | 0.8750 | 0.2962 | 0.8136 |
| 421 | 0.1621 | 0.9219 | 0.2952 | 0.8136 |
| 422 | 0.1696 | 0.9500 | 0.2945 | 0.8305 |
| 423 | 0.2096 | 0.8984 | 0.2934 | 0.8305 |
| 424 | 0.2152 | 0.9000 | 0.2935 | 0.8305 |
| 425 | 0.1662 | 0.9297 | 0.2931 | 0.8305 |
| 426 | 0.1505 | 0.9297 | 0.2917 | 0.8305 |
| 427 | 0.1576 | 0.9375 | 0.2896 | 0.8305 |
| 428 | 0.2311 | 0.8625 | 0.2872 | 0.8305 |
| 429 | 0.1310 | 0.9125 | 0.2852 | 0.8305 |
| 430 | 0.1362 | 0.9625 | 0.2846 | 0.8305 |
| 431 | 0.1907 | 0.8672 | 0.2838 | 0.8305 |
| 432 | 0.1620 | 0.9375 | 0.2835 | 0.8305 |
| 433 | 0.1835 | 0.9000 | 0.2827 | 0.8305 |
| 434 | 0.1855 | 0.8875 | 0.2822 | 0.8305 |
| 435 | 0.1618 | 0.9453 | 0.2819 | 0.8305 |
| 436 | 0.1945 | 0.9000 | 0.2820 | 0.8305 |
| 437 | 0.1356 | 0.9766 | 0.2816 | 0.8305 |
| 438 | 0.1677 | 0.9125 | 0.2828 | 0.8305 |
| 439 | 0.1504 | 0.9219 | 0.2843 | 0.8305 |
| 440 | 0.2032 | 0.8875 | 0.2862 | 0.8305 |
| 441 | 0.1492 | 0.9625 | 0.2884 | 0.8305 |
| 442 | 0.1689 | 0.9125 | 0.2880 | 0.8305 |
| 443 | 0.1659 | 0.9250 | 0.2883 | 0.8305 |
| 444 | 0.2104 | 0.8828 | 0.2863 | 0.8305 |
| 445 | 0.1544 | 0.9219 | 0.2832 | 0.8305 |
| 446 | 0.1321 | 0.9766 | 0.2813 | 0.8305 |
| 447 | 0.1680 | 0.9125 | 0.2811 | 0.8136 |
| 448 | 0.1816 | 0.9141 | 0.2806 | 0.8136 |
| 449 | 0.1797 | 0.9000 | 0.2814 | 0.8136 |
| 450 | 0.1986 | 0.8750 | 0.2840 | 0.8136 |
| 451 | 0.1813 | 0.8984 | 0.2866 | 0.8136 |
| 452 | 0.2064 | 0.8375 | 0.2891 | 0.8136 |
| 453 | 0.1394 | 0.9625 | 0.2909 | 0.8136 |
| 454 | 0.1555 | 0.9375 | 0.2903 | 0.8136 |
| 455 | 0.1647 | 0.9375 | 0.2888 | 0.8136 |
| 456 | 0.2253 | 0.8625 | 0.2889 | 0.8136 |
| 457 | 0.1515 | 0.9625 | 0.2885 | 0.8136 |
| 458 | 0.1796 | 0.9141 | 0.2875 | 0.8136 |
| 459 | 0.1726 | 0.9000 | 0.2845 | 0.8136 |
| 460 | 0.1235 | 0.9500 | 0.2820 | 0.8136 |
| 461 | 0.1356 | 0.9375 | 0.2795 | 0.8136 |
| 462 | 0.1549 | 0.9625 | 0.2786 | 0.8136 |
| 463 | 0.1813 | 0.9141 | 0.2789 | 0.8305 |
| 464 | 0.1662 | 0.9375 | 0.2788 | 0.8305 |
| 465 | 0.1256 | 0.9750 | 0.2806 | 0.8305 |
| 466 | 0.1848 | 0.9141 | 0.2832 | 0.8136 |
| 467 | 0.1815 | 0.9219 | 0.2864 | 0.8136 |
| 468 | 0.1715 | 0.8906 | 0.2882 | 0.8136 |
| 469 | 0.1390 | 0.9375 | 0.2885 | 0.8136 |
| 470 | 0.1557 | 0.9000 | 0.2893 | 0.8136 |
| 471 | 0.1416 | 0.9375 | 0.2901 | 0.8136 |
| 472 | 0.1847 | 0.9000 | 0.2897 | 0.8136 |
| 473 | 0.1655 | 0.9297 | 0.2874 | 0.8136 |
| 474 | 0.1800 | 0.9141 | 0.2858 | 0.8136 |
| 475 | 0.1262 | 0.9453 | 0.2833 | 0.8305 |
| 476 | 0.2006 | 0.8906 | 0.2805 | 0.8305 |
| 477 | 0.1352 | 0.9609 | 0.2774 | 0.8305 |
| 478 | 0.1754 | 0.8906 | 0.2742 | 0.8305 |
| 479 | 0.1439 | 0.9531 | 0.2717 | 0.8305 |
| 480 | 0.1415 | 0.9531 | 0.2691 | 0.8305 |
| 481 | 0.1797 | 0.9062 | 0.2675 | 0.8305 |
| 482 | 0.1773 | 0.9000 | 0.2663 | 0.8305 |
| 483 | 0.1369 | 0.9375 | 0.2664 | 0.8305 |
| 484 | 0.1577 | 0.9141 | 0.2667 | 0.8305 |
| 485 | 0.1333 | 0.9531 | 0.2676 | 0.8305 |
| 486 | 0.1250 | 0.9625 | 0.2692 | 0.8305 |
| 487 | 0.1775 | 0.8875 | 0.2708 | 0.8305 |
| 488 | 0.1744 | 0.9297 | 0.2726 | 0.8305 |
| 489 | 0.1200 | 0.9500 | 0.2729 | 0.8305 |
| 490 | 0.1249 | 0.9375 | 0.2736 | 0.8305 |
| 491 | 0.1771 | 0.9250 | 0.2729 | 0.8305 |
| 492 | 0.1549 | 0.9125 | 0.2700 | 0.8305 |
| 493 | 0.1681 | 0.9141 | 0.2669 | 0.8305 |
| 494 | 0.2009 | 0.8750 | 0.2638 | 0.8475 |
| 495 | 0.1664 | 0.9375 | 0.2620 | 0.8475 |
| 496 | 0.2320 | 0.8984 | 0.2619 | 0.8475 |
| 497 | 0.1626 | 0.8906 | 0.2602 | 0.8644 |
| 498 | 0.1545 | 0.9531 | 0.2595 | 0.8644 |
| 499 | 0.1404 | 0.9875 | 0.2609 | 0.8644 |
| 500 | 0.1046 | 0.9875 | 0.2629 | 0.8644 |
| 501 | 0.1495 | 0.9531 | 0.2650 | 0.8644 |
| 502 | 0.1643 | 0.9141 | 0.2670 | 0.8644 |
| 503 | 0.1779 | 0.9062 | 0.2686 | 0.8644 |
| 504 | 0.1600 | 0.9625 | 0.2689 | 0.8644 |
| 505 | 0.1275 | 0.9625 | 0.2680 | 0.8644 |
| 506 | 0.1473 | 0.9375 | 0.2678 | 0.8644 |
| 507 | 0.1198 | 0.9609 | 0.2672 | 0.8644 |
| 508 | 0.1290 | 0.9625 | 0.2670 | 0.8644 |
| 509 | 0.1622 | 0.9219 | 0.2672 | 0.8644 |
| 510 | 0.1284 | 0.9250 | 0.2674 | 0.8644 |
| 511 | 0.1641 | 0.9375 | 0.2685 | 0.8644 |
| 512 | 0.1069 | 0.9609 | 0.2706 | 0.8475 |
| 513 | 0.1871 | 0.9250 | 0.2733 | 0.8305 |
| 514 | 0.1451 | 0.9297 | 0.2743 | 0.8305 |
| 515 | 0.1631 | 0.9375 | 0.2753 | 0.8305 |
| 516 | 0.1393 | 0.9297 | 0.2769 | 0.8305 |
| 517 | 0.1717 | 0.9250 | 0.2786 | 0.8305 |
| 518 | 0.2001 | 0.9250 | 0.2801 | 0.8136 |
| 519 | 0.1469 | 0.9062 | 0.2800 | 0.8136 |
| 520 | 0.1444 | 0.9531 | 0.2781 | 0.8136 |
| 521 | 0.1783 | 0.9219 | 0.2761 | 0.8136 |
| 522 | 0.1481 | 0.9625 | 0.2747 | 0.8136 |
| 523 | 0.1230 | 0.9500 | 0.2744 | 0.8136 |
| 524 | 0.1329 | 0.9625 | 0.2744 | 0.8136 |
| 525 | 0.1305 | 0.9531 | 0.2744 | 0.8136 |
| 526 | 0.0974 | 0.9750 | 0.2743 | 0.8136 |
| 527 | 0.2049 | 0.9125 | 0.2730 | 0.8136 |
| 528 | 0.1441 | 0.9297 | 0.2722 | 0.8136 |
| 529 | 0.1328 | 0.9453 | 0.2716 | 0.8136 |
| 530 | 0.1522 | 0.9375 | 0.2708 | 0.8136 |
| 531 | 0.1479 | 0.9531 | 0.2707 | 0.8136 |
| 532 | 0.1405 | 0.9375 | 0.2708 | 0.8136 |
| 533 | 0.1355 | 0.9219 | 0.2722 | 0.8136 |
| 534 | 0.1524 | 0.9375 | 0.2752 | 0.8136 |
| 535 | 0.1148 | 0.9625 | 0.2764 | 0.8136 |
| 536 | 0.1230 | 0.9500 | 0.2759 | 0.8136 |
| 537 | 0.1516 | 0.9500 | 0.2749 | 0.8136 |
| 538 | 0.1491 | 0.9125 | 0.2737 | 0.8136 |
| 539 | 0.1335 | 0.9766 | 0.2722 | 0.8305 |
| 540 | 0.1515 | 0.9375 | 0.2716 | 0.8305 |
| 541 | 0.1613 | 0.9125 | 0.2709 | 0.8305 |
| 542 | 0.1141 | 0.9375 | 0.2692 | 0.8305 |
| 543 | 0.1393 | 0.9453 | 0.2681 | 0.8305 |
| 544 | 0.1320 | 0.9625 | 0.2639 | 0.8305 |
| 545 | 0.1872 | 0.9500 | 0.2605 | 0.8475 |
| 546 | 0.1484 | 0.9375 | 0.2576 | 0.8475 |
| 547 | 0.1332 | 0.9250 | 0.2548 | 0.8475 |
| 548 | 0.1152 | 0.9375 | 0.2531 | 0.8475 |
| 549 | 0.1229 | 0.9375 | 0.2502 | 0.8475 |
| 550 | 0.1275 | 0.9375 | 0.2477 | 0.8475 |
| 551 | 0.1139 | 0.9609 | 0.2460 | 0.8475 |
| 552 | 0.1195 | 0.9625 | 0.2457 | 0.8475 |
| 553 | 0.1418 | 0.9609 | 0.2463 | 0.8644 |
| 554 | 0.1361 | 0.9531 | 0.2481 | 0.8644 |
| 555 | 0.1261 | 0.9609 | 0.2497 | 0.8644 |
| 556 | 0.1351 | 0.9375 | 0.2502 | 0.8644 |
| 557 | 0.1348 | 0.9609 | 0.2511 | 0.8644 |
| 558 | 0.1423 | 0.9453 | 0.2523 | 0.8475 |
| 559 | 0.1183 | 0.9500 | 0.2542 | 0.8475 |
| 560 | 0.1366 | 0.9375 | 0.2565 | 0.8475 |
| 561 | 0.1263 | 0.9453 | 0.2591 | 0.8475 |
| 562 | 0.1715 | 0.9141 | 0.2615 | 0.8475 |
| 563 | 0.1418 | 0.9250 | 0.2651 | 0.8475 |
| 564 | 0.1290 | 0.9625 | 0.2691 | 0.8305 |
| 565 | 0.1817 | 0.9375 | 0.2708 | 0.8305 |
| 566 | 0.1019 | 0.9500 | 0.2701 | 0.8305 |
| 567 | 0.1623 | 0.9125 | 0.2697 | 0.8305 |
| 568 | 0.1237 | 0.9250 | 0.2684 | 0.8475 |
| 569 | 0.1747 | 0.8984 | 0.2667 | 0.8475 |
| 570 | 0.1495 | 0.9375 | 0.2644 | 0.8475 |
| 571 | 0.1420 | 0.9453 | 0.2626 | 0.8475 |
| 572 | 0.1442 | 0.9250 | 0.2603 | 0.8475 |
| 573 | 0.1683 | 0.9141 | 0.2589 | 0.8475 |
| 574 | 0.1001 | 0.9875 | 0.2574 | 0.8475 |
| 575 | 0.1083 | 0.9766 | 0.2565 | 0.8475 |
| 576 | 0.1630 | 0.9125 | 0.2553 | 0.8305 |
| 577 | 0.1247 | 0.9688 | 0.2550 | 0.8305 |
| 578 | 0.1639 | 0.9297 | 0.2545 | 0.8305 |
| 579 | 0.1569 | 0.9500 | 0.2547 | 0.8305 |
| 580 | 0.1216 | 0.9531 | 0.2551 | 0.8305 |
| 581 | 0.1174 | 0.9625 | 0.2562 | 0.8305 |
| 582 | 0.1507 | 0.9125 | 0.2584 | 0.8305 |
| 583 | 0.1742 | 0.9125 | 0.2610 | 0.8305 |
| 584 | 0.1347 | 0.9500 | 0.2647 | 0.8136 |
| 585 | 0.1067 | 0.9625 | 0.2673 | 0.8136 |
| 586 | 0.1478 | 0.9375 | 0.2684 | 0.8136 |
| 587 | 0.1327 | 0.9375 | 0.2703 | 0.8136 |
| 588 | 0.1022 | 0.9844 | 0.2727 | 0.8136 |
| 589 | 0.2192 | 0.9250 | 0.2742 | 0.8136 |
| 590 | 0.1731 | 0.9000 | 0.2751 | 0.8136 |
| 591 | 0.1368 | 0.9453 | 0.2766 | 0.8136 |
| 592 | 0.1619 | 0.9531 | 0.2789 | 0.8136 |
| 593 | 0.1565 | 0.9453 | 0.2819 | 0.8136 |
| 594 | 0.1473 | 0.9375 | 0.2856 | 0.8136 |
| 595 | 0.1418 | 0.9500 | 0.2865 | 0.8136 |
| 596 | 0.1448 | 0.9375 | 0.2876 | 0.8136 |
| 597 | 0.1282 | 0.9531 | 0.2887 | 0.8136 |
| 598 | 0.1232 | 0.9625 | 0.2871 | 0.8136 |
| 599 | 0.1416 | 0.9297 | 0.2858 | 0.8136 |
| 600 | 0.1402 | 0.9219 | 0.2840 | 0.8136 |
| 601 | 0.1639 | 0.9125 | 0.2813 | 0.8305 |
| 602 | 0.1876 | 0.9250 | 0.2773 | 0.8305 |
| 603 | 0.1317 | 0.9500 | 0.2740 | 0.8136 |
| 604 | 0.1224 | 0.9500 | 0.2705 | 0.8136 |
| 605 | 0.1412 | 0.9375 | 0.2674 | 0.8136 |
| 606 | 0.1069 | 0.9750 | 0.2641 | 0.8305 |
| 607 | 0.0904 | 0.9750 | 0.2630 | 0.8305 |
| 608 | 0.1305 | 0.9375 | 0.2647 | 0.8305 |
| 609 | 0.1477 | 0.9375 | 0.2663 | 0.8305 |
| 610 | 0.0939 | 1.0000 | 0.2680 | 0.8475 |
| 611 | 0.0889 | 0.9875 | 0.2703 | 0.8305 |
| 612 | 0.1134 | 0.9609 | 0.2725 | 0.8305 |
| 613 | 0.1093 | 0.9688 | 0.2741 | 0.8305 |
| 614 | 0.1112 | 0.9688 | 0.2750 | 0.8305 |
| 615 | 0.1013 | 1.0000 | 0.2758 | 0.8305 |
| 616 | 0.1483 | 0.9141 | 0.2760 | 0.8305 |
| 617 | 0.1175 | 0.9625 | 0.2762 | 0.8305 |
| 618 | 0.1037 | 0.9688 | 0.2767 | 0.8305 |
| 619 | 0.1226 | 0.9500 | 0.2775 | 0.8305 |
| 620 | 0.1093 | 0.9625 | 0.2780 | 0.8305 |
| 621 | 0.1217 | 0.9453 | 0.2780 | 0.8475 |
Epoch 622/1000
2/2 [==============================] - ETA: 0s - loss: 0.1332 - accuracy: 0.9688
Epoch 622: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 958ms/step - loss: 0.1332 - accuracy: 0.9688 - val_loss: 0.2768 - val_accuracy: 0.8475
Epoch 623/1000
2/2 [==============================] - ETA: 0s - loss: 0.1901 - accuracy: 0.8750
Epoch 623: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 874ms/step - loss: 0.1901 - accuracy: 0.8750 - val_loss: 0.2755 - val_accuracy: 0.8475
Epoch 624/1000
2/2 [==============================] - ETA: 0s - loss: 0.1137 - accuracy: 0.9531
Epoch 624: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 931ms/step - loss: 0.1137 - accuracy: 0.9531 - val_loss: 0.2747 - val_accuracy: 0.8475
Epoch 625/1000
2/2 [==============================] - ETA: 0s - loss: 0.1145 - accuracy: 0.9453
Epoch 625: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 982ms/step - loss: 0.1145 - accuracy: 0.9453 - val_loss: 0.2742 - val_accuracy: 0.8475
Epoch 626/1000
2/2 [==============================] - ETA: 0s - loss: 0.1495 - accuracy: 0.9453
Epoch 626: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 985ms/step - loss: 0.1495 - accuracy: 0.9453 - val_loss: 0.2736 - val_accuracy: 0.8475
Epoch 627/1000
2/2 [==============================] - ETA: 0s - loss: 0.0794 - accuracy: 0.9875
Epoch 627: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 817ms/step - loss: 0.0794 - accuracy: 0.9875 - val_loss: 0.2719 - val_accuracy: 0.8475
Epoch 628/1000
2/2 [==============================] - ETA: 0s - loss: 0.1697 - accuracy: 0.9141
Epoch 628: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 968ms/step - loss: 0.1697 - accuracy: 0.9141 - val_loss: 0.2718 - val_accuracy: 0.8475
Epoch 629/1000
2/2 [==============================] - ETA: 0s - loss: 0.1177 - accuracy: 0.9297
Epoch 629: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1177 - accuracy: 0.9297 - val_loss: 0.2714 - val_accuracy: 0.8475
Epoch 630/1000
2/2 [==============================] - ETA: 0s - loss: 0.1289 - accuracy: 0.9453
Epoch 630: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 930ms/step - loss: 0.1289 - accuracy: 0.9453 - val_loss: 0.2695 - val_accuracy: 0.8475
Epoch 631/1000
2/2 [==============================] - ETA: 0s - loss: 0.1265 - accuracy: 0.9625
Epoch 631: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1265 - accuracy: 0.9625 - val_loss: 0.2698 - val_accuracy: 0.8475
Epoch 632/1000
2/2 [==============================] - ETA: 0s - loss: 0.1210 - accuracy: 0.9375
Epoch 632: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1210 - accuracy: 0.9375 - val_loss: 0.2694 - val_accuracy: 0.8475
Epoch 633/1000
2/2 [==============================] - ETA: 0s - loss: 0.1212 - accuracy: 0.9531
Epoch 633: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 910ms/step - loss: 0.1212 - accuracy: 0.9531 - val_loss: 0.2685 - val_accuracy: 0.8475
Epoch 634/1000
2/2 [==============================] - ETA: 0s - loss: 0.0945 - accuracy: 0.9625
Epoch 634: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 828ms/step - loss: 0.0945 - accuracy: 0.9625 - val_loss: 0.2682 - val_accuracy: 0.8475
Epoch 635/1000
2/2 [==============================] - ETA: 0s - loss: 0.1332 - accuracy: 0.9453
Epoch 635: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1332 - accuracy: 0.9453 - val_loss: 0.2689 - val_accuracy: 0.8305
Epoch 636/1000
2/2 [==============================] - ETA: 0s - loss: 0.1162 - accuracy: 0.9297
Epoch 636: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1162 - accuracy: 0.9297 - val_loss: 0.2700 - val_accuracy: 0.8305
Epoch 637/1000
2/2 [==============================] - ETA: 0s - loss: 0.1188 - accuracy: 0.9453
Epoch 637: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 944ms/step - loss: 0.1188 - accuracy: 0.9453 - val_loss: 0.2703 - val_accuracy: 0.8305
Epoch 638/1000
2/2 [==============================] - ETA: 0s - loss: 0.1679 - accuracy: 0.9125
Epoch 638: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 835ms/step - loss: 0.1679 - accuracy: 0.9125 - val_loss: 0.2692 - val_accuracy: 0.8305
Epoch 639/1000
2/2 [==============================] - ETA: 0s - loss: 0.0977 - accuracy: 0.9625
Epoch 639: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 837ms/step - loss: 0.0977 - accuracy: 0.9625 - val_loss: 0.2677 - val_accuracy: 0.8305
Epoch 640/1000
2/2 [==============================] - ETA: 0s - loss: 0.0780 - accuracy: 0.9844
Epoch 640: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 934ms/step - loss: 0.0780 - accuracy: 0.9844 - val_loss: 0.2665 - val_accuracy: 0.8305
Epoch 641/1000
2/2 [==============================] - ETA: 0s - loss: 0.0954 - accuracy: 0.9625
Epoch 641: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 809ms/step - loss: 0.0954 - accuracy: 0.9625 - val_loss: 0.2658 - val_accuracy: 0.8305
Epoch 642/1000
2/2 [==============================] - ETA: 0s - loss: 0.1260 - accuracy: 0.9531
Epoch 642: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1260 - accuracy: 0.9531 - val_loss: 0.2659 - val_accuracy: 0.8305
Epoch 643/1000
2/2 [==============================] - ETA: 0s - loss: 0.1252 - accuracy: 0.9453
Epoch 643: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1252 - accuracy: 0.9453 - val_loss: 0.2662 - val_accuracy: 0.8305
Epoch 644/1000
2/2 [==============================] - ETA: 0s - loss: 0.1139 - accuracy: 0.9625
Epoch 644: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 820ms/step - loss: 0.1139 - accuracy: 0.9625 - val_loss: 0.2659 - val_accuracy: 0.8475
Epoch 645/1000
2/2 [==============================] - ETA: 0s - loss: 0.1121 - accuracy: 0.9531
Epoch 645: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1121 - accuracy: 0.9531 - val_loss: 0.2654 - val_accuracy: 0.8475
Epoch 646/1000
2/2 [==============================] - ETA: 0s - loss: 0.1068 - accuracy: 0.9688
Epoch 646: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1068 - accuracy: 0.9688 - val_loss: 0.2652 - val_accuracy: 0.8475
Epoch 647/1000
2/2 [==============================] - ETA: 0s - loss: 0.1136 - accuracy: 0.9625
Epoch 647: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1136 - accuracy: 0.9625 - val_loss: 0.2650 - val_accuracy: 0.8475
Epoch 648/1000
2/2 [==============================] - ETA: 0s - loss: 0.1084 - accuracy: 0.9688
Epoch 648: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1084 - accuracy: 0.9688 - val_loss: 0.2641 - val_accuracy: 0.8475
Epoch 649/1000
2/2 [==============================] - ETA: 0s - loss: 0.1123 - accuracy: 0.9531
Epoch 649: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 999ms/step - loss: 0.1123 - accuracy: 0.9531 - val_loss: 0.2637 - val_accuracy: 0.8475
Epoch 650/1000
2/2 [==============================] - ETA: 0s - loss: 0.1562 - accuracy: 0.9375
Epoch 650: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1562 - accuracy: 0.9375 - val_loss: 0.2633 - val_accuracy: 0.8475
Epoch 651/1000
2/2 [==============================] - ETA: 0s - loss: 0.1610 - accuracy: 0.9375
Epoch 651: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 804ms/step - loss: 0.1610 - accuracy: 0.9375 - val_loss: 0.2635 - val_accuracy: 0.8475
Epoch 652/1000
2/2 [==============================] - ETA: 0s - loss: 0.1656 - accuracy: 0.9141
Epoch 652: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1656 - accuracy: 0.9141 - val_loss: 0.2640 - val_accuracy: 0.8475
Epoch 653/1000
2/2 [==============================] - ETA: 0s - loss: 0.1222 - accuracy: 0.9500
Epoch 653: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 822ms/step - loss: 0.1222 - accuracy: 0.9500 - val_loss: 0.2651 - val_accuracy: 0.8475
Epoch 654/1000
2/2 [==============================] - ETA: 0s - loss: 0.1006 - accuracy: 0.9766
Epoch 654: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1006 - accuracy: 0.9766 - val_loss: 0.2669 - val_accuracy: 0.8475
Epoch 655/1000
2/2 [==============================] - ETA: 0s - loss: 0.1395 - accuracy: 0.9250
Epoch 655: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 826ms/step - loss: 0.1395 - accuracy: 0.9250 - val_loss: 0.2695 - val_accuracy: 0.8475
Epoch 656/1000
2/2 [==============================] - ETA: 0s - loss: 0.1042 - accuracy: 0.9766
Epoch 656: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1042 - accuracy: 0.9766 - val_loss: 0.2724 - val_accuracy: 0.8475
Epoch 657/1000
2/2 [==============================] - ETA: 0s - loss: 0.1471 - accuracy: 0.9125
Epoch 657: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1471 - accuracy: 0.9125 - val_loss: 0.2752 - val_accuracy: 0.8475
Epoch 658/1000
2/2 [==============================] - ETA: 0s - loss: 0.1069 - accuracy: 0.9531
Epoch 658: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 935ms/step - loss: 0.1069 - accuracy: 0.9531 - val_loss: 0.2782 - val_accuracy: 0.8475
Epoch 659/1000
2/2 [==============================] - ETA: 0s - loss: 0.0970 - accuracy: 0.9766
Epoch 659: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0970 - accuracy: 0.9766 - val_loss: 0.2803 - val_accuracy: 0.8475
Epoch 660/1000
2/2 [==============================] - ETA: 0s - loss: 0.1135 - accuracy: 0.9609
Epoch 660: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1135 - accuracy: 0.9609 - val_loss: 0.2815 - val_accuracy: 0.8305
Epoch 661/1000
2/2 [==============================] - ETA: 0s - loss: 0.0622 - accuracy: 0.9875
Epoch 661: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 801ms/step - loss: 0.0622 - accuracy: 0.9875 - val_loss: 0.2827 - val_accuracy: 0.8305
Epoch 662/1000
2/2 [==============================] - ETA: 0s - loss: 0.1074 - accuracy: 0.9625
Epoch 662: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 812ms/step - loss: 0.1074 - accuracy: 0.9625 - val_loss: 0.2826 - val_accuracy: 0.8305
Epoch 663/1000
2/2 [==============================] - ETA: 0s - loss: 0.1000 - accuracy: 0.9844
Epoch 663: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1000 - accuracy: 0.9844 - val_loss: 0.2818 - val_accuracy: 0.8475
Epoch 664/1000
2/2 [==============================] - ETA: 0s - loss: 0.0919 - accuracy: 0.9500
Epoch 664: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 840ms/step - loss: 0.0919 - accuracy: 0.9500 - val_loss: 0.2819 - val_accuracy: 0.8475
Epoch 665/1000
2/2 [==============================] - ETA: 0s - loss: 0.1268 - accuracy: 0.9375
Epoch 665: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1268 - accuracy: 0.9375 - val_loss: 0.2829 - val_accuracy: 0.8475
Epoch 666/1000
2/2 [==============================] - ETA: 0s - loss: 0.1491 - accuracy: 0.9250
Epoch 666: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1491 - accuracy: 0.9250 - val_loss: 0.2811 - val_accuracy: 0.8475
Epoch 667/1000
2/2 [==============================] - ETA: 0s - loss: 0.1190 - accuracy: 0.9500
Epoch 667: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 820ms/step - loss: 0.1190 - accuracy: 0.9500 - val_loss: 0.2784 - val_accuracy: 0.8475
Epoch 668/1000
2/2 [==============================] - ETA: 0s - loss: 0.0955 - accuracy: 0.9688
Epoch 668: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0955 - accuracy: 0.9688 - val_loss: 0.2763 - val_accuracy: 0.8475
Epoch 669/1000
2/2 [==============================] - ETA: 0s - loss: 0.1251 - accuracy: 0.9531
Epoch 669: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1251 - accuracy: 0.9531 - val_loss: 0.2759 - val_accuracy: 0.8475
Epoch 670/1000
2/2 [==============================] - ETA: 0s - loss: 0.1130 - accuracy: 0.9500
Epoch 670: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 821ms/step - loss: 0.1130 - accuracy: 0.9500 - val_loss: 0.2762 - val_accuracy: 0.8475
Epoch 671/1000
2/2 [==============================] - ETA: 0s - loss: 0.1206 - accuracy: 0.9375
Epoch 671: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1206 - accuracy: 0.9375 - val_loss: 0.2766 - val_accuracy: 0.8305
Epoch 672/1000
2/2 [==============================] - ETA: 0s - loss: 0.1287 - accuracy: 0.9453
Epoch 672: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1287 - accuracy: 0.9453 - val_loss: 0.2768 - val_accuracy: 0.8305
Epoch 673/1000
2/2 [==============================] - ETA: 0s - loss: 0.1517 - accuracy: 0.9250
Epoch 673: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 818ms/step - loss: 0.1517 - accuracy: 0.9250 - val_loss: 0.2769 - val_accuracy: 0.8305
Epoch 674/1000
2/2 [==============================] - ETA: 0s - loss: 0.1057 - accuracy: 0.9609
Epoch 674: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1057 - accuracy: 0.9609 - val_loss: 0.2767 - val_accuracy: 0.8305
Epoch 675/1000
2/2 [==============================] - ETA: 0s - loss: 0.1428 - accuracy: 0.9375
Epoch 675: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 834ms/step - loss: 0.1428 - accuracy: 0.9375 - val_loss: 0.2772 - val_accuracy: 0.8305
Epoch 676/1000
2/2 [==============================] - ETA: 0s - loss: 0.1095 - accuracy: 0.9625
Epoch 676: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1095 - accuracy: 0.9625 - val_loss: 0.2795 - val_accuracy: 0.8305
Epoch 677/1000
2/2 [==============================] - ETA: 0s - loss: 0.1420 - accuracy: 0.9375
Epoch 677: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1420 - accuracy: 0.9375 - val_loss: 0.2809 - val_accuracy: 0.8305
Epoch 678/1000
2/2 [==============================] - ETA: 0s - loss: 0.1261 - accuracy: 0.9141
Epoch 678: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1261 - accuracy: 0.9141 - val_loss: 0.2811 - val_accuracy: 0.8305
Epoch 679/1000
2/2 [==============================] - ETA: 0s - loss: 0.1210 - accuracy: 0.9625
Epoch 679: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 808ms/step - loss: 0.1210 - accuracy: 0.9625 - val_loss: 0.2805 - val_accuracy: 0.8305
Epoch 680/1000
2/2 [==============================] - ETA: 0s - loss: 0.1199 - accuracy: 0.9250
Epoch 680: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 826ms/step - loss: 0.1199 - accuracy: 0.9250 - val_loss: 0.2789 - val_accuracy: 0.8305
Epoch 681/1000
2/2 [==============================] - ETA: 0s - loss: 0.1262 - accuracy: 0.9688
Epoch 681: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 938ms/step - loss: 0.1262 - accuracy: 0.9688 - val_loss: 0.2781 - val_accuracy: 0.8305
Epoch 682/1000
2/2 [==============================] - ETA: 0s - loss: 0.1391 - accuracy: 0.9219
Epoch 682: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1391 - accuracy: 0.9219 - val_loss: 0.2770 - val_accuracy: 0.8305
Epoch 683/1000
2/2 [==============================] - ETA: 0s - loss: 0.0833 - accuracy: 0.9875
Epoch 683: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0833 - accuracy: 0.9875 - val_loss: 0.2774 - val_accuracy: 0.8305
Epoch 684/1000
2/2 [==============================] - ETA: 0s - loss: 0.1212 - accuracy: 0.9375
Epoch 684: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 999ms/step - loss: 0.1212 - accuracy: 0.9375 - val_loss: 0.2778 - val_accuracy: 0.8305
Epoch 685/1000
2/2 [==============================] - ETA: 0s - loss: 0.1233 - accuracy: 0.9531
Epoch 685: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1233 - accuracy: 0.9531 - val_loss: 0.2769 - val_accuracy: 0.8305
Epoch 686/1000
2/2 [==============================] - ETA: 0s - loss: 0.1080 - accuracy: 0.9609
Epoch 686: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1080 - accuracy: 0.9609 - val_loss: 0.2748 - val_accuracy: 0.8305
Epoch 687/1000
2/2 [==============================] - ETA: 0s - loss: 0.1526 - accuracy: 0.9125
Epoch 687: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1526 - accuracy: 0.9125 - val_loss: 0.2761 - val_accuracy: 0.8305
Epoch 688/1000
2/2 [==============================] - ETA: 0s - loss: 0.1283 - accuracy: 0.9375
Epoch 688: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1283 - accuracy: 0.9375 - val_loss: 0.2777 - val_accuracy: 0.8305
Epoch 689/1000
2/2 [==============================] - ETA: 0s - loss: 0.1500 - accuracy: 0.9375
Epoch 689: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 831ms/step - loss: 0.1500 - accuracy: 0.9375 - val_loss: 0.2809 - val_accuracy: 0.8305
Epoch 690/1000
2/2 [==============================] - ETA: 0s - loss: 0.1213 - accuracy: 0.9375
Epoch 690: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1213 - accuracy: 0.9375 - val_loss: 0.2837 - val_accuracy: 0.8305
Epoch 691/1000
2/2 [==============================] - ETA: 0s - loss: 0.1150 - accuracy: 0.9531
Epoch 691: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1150 - accuracy: 0.9531 - val_loss: 0.2858 - val_accuracy: 0.8305
Epoch 692/1000
2/2 [==============================] - ETA: 0s - loss: 0.0847 - accuracy: 0.9766
Epoch 692: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0847 - accuracy: 0.9766 - val_loss: 0.2873 - val_accuracy: 0.8305
Epoch 693/1000
2/2 [==============================] - ETA: 0s - loss: 0.1106 - accuracy: 0.9625
Epoch 693: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1106 - accuracy: 0.9625 - val_loss: 0.2868 - val_accuracy: 0.8305
Epoch 694/1000
2/2 [==============================] - ETA: 0s - loss: 0.1030 - accuracy: 0.9750
Epoch 694: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 833ms/step - loss: 0.1030 - accuracy: 0.9750 - val_loss: 0.2863 - val_accuracy: 0.8305
Epoch 695/1000
2/2 [==============================] - ETA: 0s - loss: 0.1061 - accuracy: 0.9531
Epoch 695: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 955ms/step - loss: 0.1061 - accuracy: 0.9531 - val_loss: 0.2856 - val_accuracy: 0.8305
Epoch 696/1000
2/2 [==============================] - ETA: 0s - loss: 0.1274 - accuracy: 0.9297
Epoch 696: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1274 - accuracy: 0.9297 - val_loss: 0.2846 - val_accuracy: 0.8305
Epoch 697/1000
2/2 [==============================] - ETA: 0s - loss: 0.1182 - accuracy: 0.9531
Epoch 697: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1182 - accuracy: 0.9531 - val_loss: 0.2838 - val_accuracy: 0.8305
Epoch 698/1000
2/2 [==============================] - ETA: 0s - loss: 0.1083 - accuracy: 0.9453
Epoch 698: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1083 - accuracy: 0.9453 - val_loss: 0.2828 - val_accuracy: 0.8305
Epoch 699/1000
2/2 [==============================] - ETA: 0s - loss: 0.1175 - accuracy: 0.9531
Epoch 699: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1175 - accuracy: 0.9531 - val_loss: 0.2830 - val_accuracy: 0.8305
Epoch 700/1000
2/2 [==============================] - ETA: 0s - loss: 0.1411 - accuracy: 0.9297
Epoch 700: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 957ms/step - loss: 0.1411 - accuracy: 0.9297 - val_loss: 0.2833 - val_accuracy: 0.8305
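For context, the recurring "saving model to training_1/cp.ckpt" messages are what a `tf.keras.callbacks.ModelCheckpoint` callback prints when `verbose=1` and the default `save_freq="epoch"` are used. The following is a minimal sketch of a setup that produces this logging pattern; the model architecture, dataset, and sizes are placeholders (the log only reveals the checkpoint path and two batches per epoch), not the actual configuration of this run:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: sizes are illustrative guesses chosen so that
# batch_size=64 yields the "2/2" steps per epoch seen in the log.
x_train = np.random.rand(128, 10).astype("float32")
y_train = np.random.randint(0, 2, size=(128,))
x_val = np.random.rand(59, 10).astype("float32")
y_val = np.random.randint(0, 2, size=(59,))

# Placeholder binary classifier; the real architecture is not in the log.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# verbose=1 prints "Epoch N: saving model to training_1/cp.ckpt";
# the default save_freq="epoch" overwrites the file every epoch.
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath="training_1/cp.ckpt",
    save_weights_only=True,
    verbose=1,
)

model.fit(
    x_train, y_train,
    epochs=1000,
    batch_size=64,
    validation_data=(x_val, y_val),
    callbacks=[cp_callback],
)
```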
[Epochs 701-770, same pattern, condensed (checkpoint saved after every epoch): loss 0.06-0.16, accuracy 0.90-1.00; val_loss peaks near 0.29 early on and drifts down to about 0.25 by epoch 770, while val_accuracy alternates between 0.8305 and 0.8475. Every 10th epoch:]
Epoch 710/1000 - loss: 0.1103 - accuracy: 0.9500 - val_loss: 0.2803 - val_accuracy: 0.8305
Epoch 720/1000 - loss: 0.0994 - accuracy: 0.9922 - val_loss: 0.2731 - val_accuracy: 0.8475
Epoch 730/1000 - loss: 0.1248 - accuracy: 0.9297 - val_loss: 0.2611 - val_accuracy: 0.8475
Epoch 740/1000 - loss: 0.0813 - accuracy: 0.9922 - val_loss: 0.2757 - val_accuracy: 0.8305
Epoch 750/1000 - loss: 0.0987 - accuracy: 0.9531 - val_loss: 0.2732 - val_accuracy: 0.8305
Epoch 760/1000 - loss: 0.0941 - accuracy: 0.9875 - val_loss: 0.2682 - val_accuracy: 0.8475
Epoch 770/1000 - loss: 0.0646 - accuracy: 1.0000 - val_loss: 0.2518 - val_accuracy: 0.8475
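Because `training_1/cp.ckpt` is overwritten on every epoch, the file always holds the weights from the most recent epoch, not the best-scoring one. Restoring them into a model with the identical architecture takes a single call. A sketch, reusing the placeholder `model` and validation arrays from the snippet above:

```python
# Rebuild (or reuse) a model with the same architecture, then
# load the latest checkpointed weights from disk.
model.load_weights("training_1/cp.ckpt")

# Sanity check against the held-out data (placeholder arrays as above).
loss, acc = model.evaluate(x_val, y_val, verbose=2)
print(f"Restored model - val_loss: {loss:.4f}, val_accuracy: {acc:.4f}")
```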
[Epochs 771-833, same pattern, condensed: loss 0.06-0.17, accuracy 0.89-1.00; val_loss bottoms out at 0.2401 around epochs 827-828, and val_accuracy reaches 0.8644 for the first time at epoch 800, then holds at 0.8644 from epoch 826 through 833. Every 10th epoch plus the last:]
Epoch 780/1000 - loss: 0.1120 - accuracy: 0.9688 - val_loss: 0.2678 - val_accuracy: 0.8475
Epoch 790/1000 - loss: 0.1131 - accuracy: 0.9625 - val_loss: 0.2837 - val_accuracy: 0.8475
Epoch 800/1000 - loss: 0.0642 - accuracy: 1.0000 - val_loss: 0.2839 - val_accuracy: 0.8644
Epoch 810/1000 - loss: 0.1655 - accuracy: 0.8875 - val_loss: 0.2673 - val_accuracy: 0.8475
Epoch 820/1000 - loss: 0.0990 - accuracy: 0.9766 - val_loss: 0.2513 - val_accuracy: 0.8475
Epoch 830/1000 - loss: 0.0924 - accuracy: 0.9609 - val_loss: 0.2409 - val_accuracy: 0.8644
Epoch 833/1000 - loss: 0.1238 - accuracy: 0.9297 - val_loss: 0.2420 - val_accuracy: 0.8644
Epoch 834/1000
2/2 [==============================] - ETA: 0s - loss: 0.0821 - accuracy: 0.9844
Epoch 834: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0821 - accuracy: 0.9844 - val_loss: 0.2424 - val_accuracy: 0.8644
Epoch 835/1000
2/2 [==============================] - ETA: 0s - loss: 0.1200 - accuracy: 0.9375
Epoch 835: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 958ms/step - loss: 0.1200 - accuracy: 0.9375 - val_loss: 0.2430 - val_accuracy: 0.8644
Epoch 836/1000
2/2 [==============================] - ETA: 0s - loss: 0.1401 - accuracy: 0.9375
Epoch 836: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 933ms/step - loss: 0.1401 - accuracy: 0.9375 - val_loss: 0.2434 - val_accuracy: 0.8644
Epoch 837/1000
2/2 [==============================] - ETA: 0s - loss: 0.0621 - accuracy: 0.9922
Epoch 837: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0621 - accuracy: 0.9922 - val_loss: 0.2446 - val_accuracy: 0.8644
Epoch 838/1000
2/2 [==============================] - ETA: 0s - loss: 0.1004 - accuracy: 0.9500
Epoch 838: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 817ms/step - loss: 0.1004 - accuracy: 0.9500 - val_loss: 0.2464 - val_accuracy: 0.8644
Epoch 839/1000
2/2 [==============================] - ETA: 0s - loss: 0.0905 - accuracy: 0.9766
Epoch 839: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0905 - accuracy: 0.9766 - val_loss: 0.2481 - val_accuracy: 0.8644
Epoch 840/1000
2/2 [==============================] - ETA: 0s - loss: 0.1004 - accuracy: 0.9500
Epoch 840: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 887ms/step - loss: 0.1004 - accuracy: 0.9500 - val_loss: 0.2505 - val_accuracy: 0.8644
Epoch 841/1000
2/2 [==============================] - ETA: 0s - loss: 0.1146 - accuracy: 0.9750
Epoch 841: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1146 - accuracy: 0.9750 - val_loss: 0.2507 - val_accuracy: 0.8644
Epoch 842/1000
2/2 [==============================] - ETA: 0s - loss: 0.0898 - accuracy: 0.9844
Epoch 842: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0898 - accuracy: 0.9844 - val_loss: 0.2503 - val_accuracy: 0.8644
Epoch 843/1000
2/2 [==============================] - ETA: 0s - loss: 0.1224 - accuracy: 0.9375
Epoch 843: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1224 - accuracy: 0.9375 - val_loss: 0.2509 - val_accuracy: 0.8644
Epoch 844/1000
2/2 [==============================] - ETA: 0s - loss: 0.0545 - accuracy: 0.9875
Epoch 844: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 848ms/step - loss: 0.0545 - accuracy: 0.9875 - val_loss: 0.2514 - val_accuracy: 0.8644
Epoch 845/1000
2/2 [==============================] - ETA: 0s - loss: 0.1240 - accuracy: 0.9250
Epoch 845: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1240 - accuracy: 0.9250 - val_loss: 0.2505 - val_accuracy: 0.8644
Epoch 846/1000
2/2 [==============================] - ETA: 0s - loss: 0.1128 - accuracy: 0.9750
Epoch 846: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1128 - accuracy: 0.9750 - val_loss: 0.2508 - val_accuracy: 0.8644
Epoch 847/1000
2/2 [==============================] - ETA: 0s - loss: 0.0841 - accuracy: 0.9500
Epoch 847: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0841 - accuracy: 0.9500 - val_loss: 0.2514 - val_accuracy: 0.8644
Epoch 848/1000
2/2 [==============================] - ETA: 0s - loss: 0.0703 - accuracy: 0.9844
Epoch 848: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 928ms/step - loss: 0.0703 - accuracy: 0.9844 - val_loss: 0.2520 - val_accuracy: 0.8644
Epoch 849/1000
2/2 [==============================] - ETA: 0s - loss: 0.0979 - accuracy: 0.9531
Epoch 849: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0979 - accuracy: 0.9531 - val_loss: 0.2536 - val_accuracy: 0.8644
Epoch 850/1000
2/2 [==============================] - ETA: 0s - loss: 0.0953 - accuracy: 0.9750
Epoch 850: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 842ms/step - loss: 0.0953 - accuracy: 0.9750 - val_loss: 0.2552 - val_accuracy: 0.8644
Epoch 851/1000
2/2 [==============================] - ETA: 0s - loss: 0.0794 - accuracy: 0.9750
Epoch 851: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 833ms/step - loss: 0.0794 - accuracy: 0.9750 - val_loss: 0.2572 - val_accuracy: 0.8644
Epoch 852/1000
2/2 [==============================] - ETA: 0s - loss: 0.0963 - accuracy: 0.9688
Epoch 852: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0963 - accuracy: 0.9688 - val_loss: 0.2586 - val_accuracy: 0.8644
Epoch 853/1000
2/2 [==============================] - ETA: 0s - loss: 0.0843 - accuracy: 0.9625
Epoch 853: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0843 - accuracy: 0.9625 - val_loss: 0.2596 - val_accuracy: 0.8644
Epoch 854/1000
2/2 [==============================] - ETA: 0s - loss: 0.1328 - accuracy: 0.9453
Epoch 854: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1328 - accuracy: 0.9453 - val_loss: 0.2612 - val_accuracy: 0.8644
Epoch 855/1000
2/2 [==============================] - ETA: 0s - loss: 0.1115 - accuracy: 0.9453
Epoch 855: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1115 - accuracy: 0.9453 - val_loss: 0.2625 - val_accuracy: 0.8644
Epoch 856/1000
2/2 [==============================] - ETA: 0s - loss: 0.0815 - accuracy: 0.9750
Epoch 856: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 881ms/step - loss: 0.0815 - accuracy: 0.9750 - val_loss: 0.2628 - val_accuracy: 0.8644
Epoch 857/1000
2/2 [==============================] - ETA: 0s - loss: 0.0965 - accuracy: 0.9609
Epoch 857: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0965 - accuracy: 0.9609 - val_loss: 0.2621 - val_accuracy: 0.8644
Epoch 858/1000
2/2 [==============================] - ETA: 0s - loss: 0.0653 - accuracy: 0.9844
Epoch 858: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0653 - accuracy: 0.9844 - val_loss: 0.2615 - val_accuracy: 0.8644
Epoch 859/1000
2/2 [==============================] - ETA: 0s - loss: 0.0777 - accuracy: 0.9844
Epoch 859: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 947ms/step - loss: 0.0777 - accuracy: 0.9844 - val_loss: 0.2625 - val_accuracy: 0.8644
Epoch 860/1000
2/2 [==============================] - ETA: 0s - loss: 0.0645 - accuracy: 0.9750
Epoch 860: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0645 - accuracy: 0.9750 - val_loss: 0.2642 - val_accuracy: 0.8644
Epoch 861/1000
2/2 [==============================] - ETA: 0s - loss: 0.0972 - accuracy: 0.9531
Epoch 861: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0972 - accuracy: 0.9531 - val_loss: 0.2652 - val_accuracy: 0.8644
Epoch 862/1000
2/2 [==============================] - ETA: 0s - loss: 0.0886 - accuracy: 0.9750
Epoch 862: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 864ms/step - loss: 0.0886 - accuracy: 0.9750 - val_loss: 0.2662 - val_accuracy: 0.8644
Epoch 863/1000
2/2 [==============================] - ETA: 0s - loss: 0.0888 - accuracy: 0.9625
Epoch 863: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0888 - accuracy: 0.9625 - val_loss: 0.2676 - val_accuracy: 0.8644
Epoch 864/1000
2/2 [==============================] - ETA: 0s - loss: 0.0918 - accuracy: 0.9297
Epoch 864: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 955ms/step - loss: 0.0918 - accuracy: 0.9297 - val_loss: 0.2694 - val_accuracy: 0.8644
Epoch 865/1000
2/2 [==============================] - ETA: 0s - loss: 0.0777 - accuracy: 0.9750
Epoch 865: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 840ms/step - loss: 0.0777 - accuracy: 0.9750 - val_loss: 0.2710 - val_accuracy: 0.8644
Epoch 866/1000
2/2 [==============================] - ETA: 0s - loss: 0.0713 - accuracy: 0.9844
Epoch 866: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0713 - accuracy: 0.9844 - val_loss: 0.2715 - val_accuracy: 0.8644
Epoch 867/1000
2/2 [==============================] - ETA: 0s - loss: 0.0677 - accuracy: 0.9750
Epoch 867: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 840ms/step - loss: 0.0677 - accuracy: 0.9750 - val_loss: 0.2721 - val_accuracy: 0.8644
Epoch 868/1000
2/2 [==============================] - ETA: 0s - loss: 0.0762 - accuracy: 0.9625
Epoch 868: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0762 - accuracy: 0.9625 - val_loss: 0.2707 - val_accuracy: 0.8644
Epoch 869/1000
2/2 [==============================] - ETA: 0s - loss: 0.0939 - accuracy: 0.9875
Epoch 869: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 871ms/step - loss: 0.0939 - accuracy: 0.9875 - val_loss: 0.2699 - val_accuracy: 0.8644
Epoch 870/1000
2/2 [==============================] - ETA: 0s - loss: 0.0782 - accuracy: 0.9875
Epoch 870: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 839ms/step - loss: 0.0782 - accuracy: 0.9875 - val_loss: 0.2694 - val_accuracy: 0.8644
Epoch 871/1000
2/2 [==============================] - ETA: 0s - loss: 0.0965 - accuracy: 0.9531
Epoch 871: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 962ms/step - loss: 0.0965 - accuracy: 0.9531 - val_loss: 0.2689 - val_accuracy: 0.8644
Epoch 872/1000
2/2 [==============================] - ETA: 0s - loss: 0.0861 - accuracy: 0.9625
Epoch 872: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0861 - accuracy: 0.9625 - val_loss: 0.2691 - val_accuracy: 0.8644
Epoch 873/1000
2/2 [==============================] - ETA: 0s - loss: 0.0783 - accuracy: 0.9609
Epoch 873: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 937ms/step - loss: 0.0783 - accuracy: 0.9609 - val_loss: 0.2699 - val_accuracy: 0.8644
Epoch 874/1000
2/2 [==============================] - ETA: 0s - loss: 0.1119 - accuracy: 0.9688
Epoch 874: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1119 - accuracy: 0.9688 - val_loss: 0.2719 - val_accuracy: 0.8644
Epoch 875/1000
2/2 [==============================] - ETA: 0s - loss: 0.0761 - accuracy: 0.9500
Epoch 875: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0761 - accuracy: 0.9500 - val_loss: 0.2753 - val_accuracy: 0.8644
Epoch 876/1000
2/2 [==============================] - ETA: 0s - loss: 0.0681 - accuracy: 0.9875
Epoch 876: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 824ms/step - loss: 0.0681 - accuracy: 0.9875 - val_loss: 0.2789 - val_accuracy: 0.8644
Epoch 877/1000
2/2 [==============================] - ETA: 0s - loss: 0.0823 - accuracy: 0.9844
Epoch 877: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0823 - accuracy: 0.9844 - val_loss: 0.2809 - val_accuracy: 0.8644
Epoch 878/1000
2/2 [==============================] - ETA: 0s - loss: 0.0974 - accuracy: 0.9750
Epoch 878: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 921ms/step - loss: 0.0974 - accuracy: 0.9750 - val_loss: 0.2807 - val_accuracy: 0.8644
Epoch 879/1000
2/2 [==============================] - ETA: 0s - loss: 0.0780 - accuracy: 0.9750
Epoch 879: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0780 - accuracy: 0.9750 - val_loss: 0.2798 - val_accuracy: 0.8644
Epoch 880/1000
2/2 [==============================] - ETA: 0s - loss: 0.0934 - accuracy: 0.9609
Epoch 880: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0934 - accuracy: 0.9609 - val_loss: 0.2805 - val_accuracy: 0.8644
Epoch 881/1000
2/2 [==============================] - ETA: 0s - loss: 0.0931 - accuracy: 0.9609
Epoch 881: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0931 - accuracy: 0.9609 - val_loss: 0.2824 - val_accuracy: 0.8644
Epoch 882/1000
2/2 [==============================] - ETA: 0s - loss: 0.0906 - accuracy: 0.9688
Epoch 882: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 947ms/step - loss: 0.0906 - accuracy: 0.9688 - val_loss: 0.2839 - val_accuracy: 0.8644
Epoch 883/1000
2/2 [==============================] - ETA: 0s - loss: 0.1245 - accuracy: 0.9141
Epoch 883: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1245 - accuracy: 0.9141 - val_loss: 0.2849 - val_accuracy: 0.8644
Epoch 884/1000
2/2 [==============================] - ETA: 0s - loss: 0.0833 - accuracy: 0.9500
Epoch 884: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0833 - accuracy: 0.9500 - val_loss: 0.2872 - val_accuracy: 0.8644
Epoch 885/1000
2/2 [==============================] - ETA: 0s - loss: 0.0882 - accuracy: 0.9766
Epoch 885: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 981ms/step - loss: 0.0882 - accuracy: 0.9766 - val_loss: 0.2888 - val_accuracy: 0.8644
Epoch 886/1000
2/2 [==============================] - ETA: 0s - loss: 0.0874 - accuracy: 0.9844
Epoch 886: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 970ms/step - loss: 0.0874 - accuracy: 0.9844 - val_loss: 0.2896 - val_accuracy: 0.8644
Epoch 887/1000
2/2 [==============================] - ETA: 0s - loss: 0.0693 - accuracy: 0.9750
Epoch 887: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 837ms/step - loss: 0.0693 - accuracy: 0.9750 - val_loss: 0.2900 - val_accuracy: 0.8644
Epoch 888/1000
2/2 [==============================] - ETA: 0s - loss: 0.1022 - accuracy: 0.9375
Epoch 888: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 819ms/step - loss: 0.1022 - accuracy: 0.9375 - val_loss: 0.2897 - val_accuracy: 0.8644
Epoch 889/1000
2/2 [==============================] - ETA: 0s - loss: 0.0957 - accuracy: 0.9750
Epoch 889: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 844ms/step - loss: 0.0957 - accuracy: 0.9750 - val_loss: 0.2891 - val_accuracy: 0.8644
Epoch 890/1000
2/2 [==============================] - ETA: 0s - loss: 0.1106 - accuracy: 0.9531
Epoch 890: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1106 - accuracy: 0.9531 - val_loss: 0.2846 - val_accuracy: 0.8644
Epoch 891/1000
2/2 [==============================] - ETA: 0s - loss: 0.0942 - accuracy: 0.9609
Epoch 891: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0942 - accuracy: 0.9609 - val_loss: 0.2803 - val_accuracy: 0.8644
Epoch 892/1000
2/2 [==============================] - ETA: 0s - loss: 0.1219 - accuracy: 0.9453
Epoch 892: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1219 - accuracy: 0.9453 - val_loss: 0.2752 - val_accuracy: 0.8644
Epoch 893/1000
2/2 [==============================] - ETA: 0s - loss: 0.0828 - accuracy: 0.9750
Epoch 893: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0828 - accuracy: 0.9750 - val_loss: 0.2698 - val_accuracy: 0.8644
Epoch 894/1000
2/2 [==============================] - ETA: 0s - loss: 0.1041 - accuracy: 0.9375
Epoch 894: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1041 - accuracy: 0.9375 - val_loss: 0.2643 - val_accuracy: 0.8644
Epoch 895/1000
2/2 [==============================] - ETA: 0s - loss: 0.0839 - accuracy: 0.9500
Epoch 895: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 834ms/step - loss: 0.0839 - accuracy: 0.9500 - val_loss: 0.2609 - val_accuracy: 0.8644
Epoch 896/1000
2/2 [==============================] - ETA: 0s - loss: 0.1266 - accuracy: 0.9375
Epoch 896: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 972ms/step - loss: 0.1266 - accuracy: 0.9375 - val_loss: 0.2591 - val_accuracy: 0.8644
Epoch 897/1000
2/2 [==============================] - ETA: 0s - loss: 0.0911 - accuracy: 0.9531
Epoch 897: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0911 - accuracy: 0.9531 - val_loss: 0.2583 - val_accuracy: 0.8475
Epoch 898/1000
2/2 [==============================] - ETA: 0s - loss: 0.1015 - accuracy: 0.9500
Epoch 898: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 866ms/step - loss: 0.1015 - accuracy: 0.9500 - val_loss: 0.2576 - val_accuracy: 0.8475
Epoch 899/1000
2/2 [==============================] - ETA: 0s - loss: 0.0907 - accuracy: 0.9766
Epoch 899: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0907 - accuracy: 0.9766 - val_loss: 0.2573 - val_accuracy: 0.8475
Epoch 900/1000
2/2 [==============================] - ETA: 0s - loss: 0.0948 - accuracy: 0.9609
Epoch 900: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0948 - accuracy: 0.9609 - val_loss: 0.2570 - val_accuracy: 0.8475
Epoch 901/1000
2/2 [==============================] - ETA: 0s - loss: 0.1040 - accuracy: 0.9750
Epoch 901: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 819ms/step - loss: 0.1040 - accuracy: 0.9750 - val_loss: 0.2567 - val_accuracy: 0.8475
Epoch 902/1000
2/2 [==============================] - ETA: 0s - loss: 0.1039 - accuracy: 0.9141
Epoch 902: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1039 - accuracy: 0.9141 - val_loss: 0.2574 - val_accuracy: 0.8475
Epoch 903/1000
2/2 [==============================] - ETA: 0s - loss: 0.0861 - accuracy: 0.9625
Epoch 903: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 829ms/step - loss: 0.0861 - accuracy: 0.9625 - val_loss: 0.2590 - val_accuracy: 0.8475
Epoch 904/1000
2/2 [==============================] - ETA: 0s - loss: 0.0647 - accuracy: 0.9875
Epoch 904: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0647 - accuracy: 0.9875 - val_loss: 0.2597 - val_accuracy: 0.8475
Epoch 905/1000
2/2 [==============================] - ETA: 0s - loss: 0.0822 - accuracy: 0.9500
Epoch 905: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0822 - accuracy: 0.9500 - val_loss: 0.2606 - val_accuracy: 0.8475
Epoch 906/1000
2/2 [==============================] - ETA: 0s - loss: 0.0629 - accuracy: 0.9750
Epoch 906: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 851ms/step - loss: 0.0629 - accuracy: 0.9750 - val_loss: 0.2621 - val_accuracy: 0.8475
Epoch 907/1000
2/2 [==============================] - ETA: 0s - loss: 0.0631 - accuracy: 1.0000
Epoch 907: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0631 - accuracy: 1.0000 - val_loss: 0.2651 - val_accuracy: 0.8475
Epoch 908/1000
2/2 [==============================] - ETA: 0s - loss: 0.0794 - accuracy: 0.9875
Epoch 908: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0794 - accuracy: 0.9875 - val_loss: 0.2677 - val_accuracy: 0.8475
Epoch 909/1000
2/2 [==============================] - ETA: 0s - loss: 0.0681 - accuracy: 1.0000
Epoch 909: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0681 - accuracy: 1.0000 - val_loss: 0.2719 - val_accuracy: 0.8475
Epoch 910/1000
2/2 [==============================] - ETA: 0s - loss: 0.0788 - accuracy: 0.9531
Epoch 910: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0788 - accuracy: 0.9531 - val_loss: 0.2756 - val_accuracy: 0.8475
Epoch 911/1000
2/2 [==============================] - ETA: 0s - loss: 0.0893 - accuracy: 0.9531
Epoch 911: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 923ms/step - loss: 0.0893 - accuracy: 0.9531 - val_loss: 0.2787 - val_accuracy: 0.8475
Epoch 912/1000
2/2 [==============================] - ETA: 0s - loss: 0.1026 - accuracy: 0.9688
Epoch 912: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1026 - accuracy: 0.9688 - val_loss: 0.2811 - val_accuracy: 0.8475
Epoch 913/1000
2/2 [==============================] - ETA: 0s - loss: 0.0945 - accuracy: 0.9688
Epoch 913: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 937ms/step - loss: 0.0945 - accuracy: 0.9688 - val_loss: 0.2832 - val_accuracy: 0.8305
Epoch 914/1000
2/2 [==============================] - ETA: 0s - loss: 0.0744 - accuracy: 0.9750
Epoch 914: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0744 - accuracy: 0.9750 - val_loss: 0.2846 - val_accuracy: 0.8305
Epoch 915/1000
2/2 [==============================] - ETA: 0s - loss: 0.0825 - accuracy: 0.9500
Epoch 915: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0825 - accuracy: 0.9500 - val_loss: 0.2836 - val_accuracy: 0.8305
Epoch 916/1000
2/2 [==============================] - ETA: 0s - loss: 0.0687 - accuracy: 0.9875
Epoch 916: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0687 - accuracy: 0.9875 - val_loss: 0.2818 - val_accuracy: 0.8305
Epoch 917/1000
2/2 [==============================] - ETA: 0s - loss: 0.1094 - accuracy: 0.9500
Epoch 917: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 841ms/step - loss: 0.1094 - accuracy: 0.9500 - val_loss: 0.2799 - val_accuracy: 0.8475
Epoch 918/1000
2/2 [==============================] - ETA: 0s - loss: 0.0705 - accuracy: 0.9875
Epoch 918: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 891ms/step - loss: 0.0705 - accuracy: 0.9875 - val_loss: 0.2781 - val_accuracy: 0.8475
Epoch 919/1000
2/2 [==============================] - ETA: 0s - loss: 0.0739 - accuracy: 0.9750
Epoch 919: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 844ms/step - loss: 0.0739 - accuracy: 0.9750 - val_loss: 0.2760 - val_accuracy: 0.8475
Epoch 920/1000
2/2 [==============================] - ETA: 0s - loss: 0.0654 - accuracy: 0.9875
Epoch 920: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 819ms/step - loss: 0.0654 - accuracy: 0.9875 - val_loss: 0.2761 - val_accuracy: 0.8475
Epoch 921/1000
2/2 [==============================] - ETA: 0s - loss: 0.1149 - accuracy: 0.9453
Epoch 921: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1149 - accuracy: 0.9453 - val_loss: 0.2791 - val_accuracy: 0.8305
Epoch 922/1000
2/2 [==============================] - ETA: 0s - loss: 0.0815 - accuracy: 0.9750
Epoch 922: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 840ms/step - loss: 0.0815 - accuracy: 0.9750 - val_loss: 0.2815 - val_accuracy: 0.8305
Epoch 923/1000
2/2 [==============================] - ETA: 0s - loss: 0.1019 - accuracy: 0.9766
Epoch 923: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1019 - accuracy: 0.9766 - val_loss: 0.2835 - val_accuracy: 0.8305
Epoch 924/1000
2/2 [==============================] - ETA: 0s - loss: 0.0601 - accuracy: 1.0000
Epoch 924: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0601 - accuracy: 1.0000 - val_loss: 0.2857 - val_accuracy: 0.8305
Epoch 925/1000
2/2 [==============================] - ETA: 0s - loss: 0.1296 - accuracy: 0.9125
Epoch 925: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 839ms/step - loss: 0.1296 - accuracy: 0.9125 - val_loss: 0.2871 - val_accuracy: 0.8305
Epoch 926/1000
2/2 [==============================] - ETA: 0s - loss: 0.0943 - accuracy: 0.9766
Epoch 926: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0943 - accuracy: 0.9766 - val_loss: 0.2907 - val_accuracy: 0.8305
Epoch 927/1000
2/2 [==============================] - ETA: 0s - loss: 0.0939 - accuracy: 0.9766
Epoch 927: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0939 - accuracy: 0.9766 - val_loss: 0.2958 - val_accuracy: 0.8305
Epoch 928/1000
2/2 [==============================] - ETA: 0s - loss: 0.0990 - accuracy: 0.9625
Epoch 928: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0990 - accuracy: 0.9625 - val_loss: 0.2993 - val_accuracy: 0.8136
Epoch 929/1000
2/2 [==============================] - ETA: 0s - loss: 0.0945 - accuracy: 0.9609
Epoch 929: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0945 - accuracy: 0.9609 - val_loss: 0.3029 - val_accuracy: 0.8136
Epoch 930/1000
2/2 [==============================] - ETA: 0s - loss: 0.0748 - accuracy: 0.9844
Epoch 930: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0748 - accuracy: 0.9844 - val_loss: 0.3062 - val_accuracy: 0.8136
Epoch 931/1000
2/2 [==============================] - ETA: 0s - loss: 0.0828 - accuracy: 0.9766
Epoch 931: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0828 - accuracy: 0.9766 - val_loss: 0.3082 - val_accuracy: 0.8136
Epoch 932/1000
2/2 [==============================] - ETA: 0s - loss: 0.1561 - accuracy: 0.9500
Epoch 932: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 902ms/step - loss: 0.1561 - accuracy: 0.9500 - val_loss: 0.3088 - val_accuracy: 0.8136
Epoch 933/1000
2/2 [==============================] - ETA: 0s - loss: 0.0936 - accuracy: 0.9531
Epoch 933: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 985ms/step - loss: 0.0936 - accuracy: 0.9531 - val_loss: 0.3044 - val_accuracy: 0.8136
Epoch 934/1000
2/2 [==============================] - ETA: 0s - loss: 0.0693 - accuracy: 0.9750
Epoch 934: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0693 - accuracy: 0.9750 - val_loss: 0.3002 - val_accuracy: 0.8136
Epoch 935/1000
2/2 [==============================] - ETA: 0s - loss: 0.0751 - accuracy: 0.9688
Epoch 935: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 958ms/step - loss: 0.0751 - accuracy: 0.9688 - val_loss: 0.2972 - val_accuracy: 0.8305
Epoch 936/1000
2/2 [==============================] - ETA: 0s - loss: 0.0536 - accuracy: 0.9875
Epoch 936: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 843ms/step - loss: 0.0536 - accuracy: 0.9875 - val_loss: 0.2937 - val_accuracy: 0.8305
Epoch 937/1000
2/2 [==============================] - ETA: 0s - loss: 0.0572 - accuracy: 0.9875
Epoch 937: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 857ms/step - loss: 0.0572 - accuracy: 0.9875 - val_loss: 0.2893 - val_accuracy: 0.8305
Epoch 938/1000
2/2 [==============================] - ETA: 0s - loss: 0.0632 - accuracy: 0.9625
Epoch 938: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0632 - accuracy: 0.9625 - val_loss: 0.2845 - val_accuracy: 0.8305
Epoch 939/1000
2/2 [==============================] - ETA: 0s - loss: 0.1012 - accuracy: 0.9531
Epoch 939: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1012 - accuracy: 0.9531 - val_loss: 0.2796 - val_accuracy: 0.8305
Epoch 940/1000
2/2 [==============================] - ETA: 0s - loss: 0.0739 - accuracy: 0.9625
Epoch 940: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 860ms/step - loss: 0.0739 - accuracy: 0.9625 - val_loss: 0.2747 - val_accuracy: 0.8475
Epoch 941/1000
2/2 [==============================] - ETA: 0s - loss: 0.0882 - accuracy: 0.9531
Epoch 941: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0882 - accuracy: 0.9531 - val_loss: 0.2706 - val_accuracy: 0.8475
Epoch 942/1000
2/2 [==============================] - ETA: 0s - loss: 0.0617 - accuracy: 0.9844
Epoch 942: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 983ms/step - loss: 0.0617 - accuracy: 0.9844 - val_loss: 0.2677 - val_accuracy: 0.8475
Epoch 943/1000
2/2 [==============================] - ETA: 0s - loss: 0.0785 - accuracy: 0.9625
Epoch 943: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0785 - accuracy: 0.9625 - val_loss: 0.2661 - val_accuracy: 0.8475
Epoch 944/1000
2/2 [==============================] - ETA: 0s - loss: 0.0550 - accuracy: 0.9875
Epoch 944: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0550 - accuracy: 0.9875 - val_loss: 0.2647 - val_accuracy: 0.8475
Epoch 945/1000
2/2 [==============================] - ETA: 0s - loss: 0.0747 - accuracy: 0.9688
Epoch 945: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0747 - accuracy: 0.9688 - val_loss: 0.2630 - val_accuracy: 0.8475
Epoch 946/1000
2/2 [==============================] - ETA: 0s - loss: 0.0778 - accuracy: 0.9766
Epoch 946: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0778 - accuracy: 0.9766 - val_loss: 0.2610 - val_accuracy: 0.8475
Epoch 947/1000
2/2 [==============================] - ETA: 0s - loss: 0.1018 - accuracy: 0.9688
Epoch 947: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1018 - accuracy: 0.9688 - val_loss: 0.2591 - val_accuracy: 0.8475
Epoch 948/1000
2/2 [==============================] - ETA: 0s - loss: 0.0876 - accuracy: 0.9688
Epoch 948: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0876 - accuracy: 0.9688 - val_loss: 0.2570 - val_accuracy: 0.8475
Epoch 949/1000
2/2 [==============================] - ETA: 0s - loss: 0.1242 - accuracy: 0.9375
Epoch 949: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 816ms/step - loss: 0.1242 - accuracy: 0.9375 - val_loss: 0.2563 - val_accuracy: 0.8644
Epoch 950/1000
2/2 [==============================] - ETA: 0s - loss: 0.1184 - accuracy: 0.9297
Epoch 950: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1184 - accuracy: 0.9297 - val_loss: 0.2557 - val_accuracy: 0.8644
Epoch 951/1000
2/2 [==============================] - ETA: 0s - loss: 0.0717 - accuracy: 0.9750
Epoch 951: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 841ms/step - loss: 0.0717 - accuracy: 0.9750 - val_loss: 0.2561 - val_accuracy: 0.8644
Epoch 952/1000
2/2 [==============================] - ETA: 0s - loss: 0.0772 - accuracy: 0.9875
Epoch 952: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 885ms/step - loss: 0.0772 - accuracy: 0.9875 - val_loss: 0.2571 - val_accuracy: 0.8644
Epoch 953/1000
2/2 [==============================] - ETA: 0s - loss: 0.0977 - accuracy: 0.9500
Epoch 953: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0977 - accuracy: 0.9500 - val_loss: 0.2591 - val_accuracy: 0.8475
Epoch 954/1000
2/2 [==============================] - ETA: 0s - loss: 0.0724 - accuracy: 0.9750
Epoch 954: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0724 - accuracy: 0.9750 - val_loss: 0.2622 - val_accuracy: 0.8475
Epoch 955/1000
2/2 [==============================] - ETA: 0s - loss: 0.0957 - accuracy: 0.9750
Epoch 955: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 838ms/step - loss: 0.0957 - accuracy: 0.9750 - val_loss: 0.2667 - val_accuracy: 0.8475
Epoch 956/1000
2/2 [==============================] - ETA: 0s - loss: 0.0891 - accuracy: 0.9688
Epoch 956: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0891 - accuracy: 0.9688 - val_loss: 0.2706 - val_accuracy: 0.8475
Epoch 957/1000
2/2 [==============================] - ETA: 0s - loss: 0.1035 - accuracy: 0.9609
Epoch 957: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1035 - accuracy: 0.9609 - val_loss: 0.2731 - val_accuracy: 0.8475
Epoch 958/1000
2/2 [==============================] - ETA: 0s - loss: 0.0647 - accuracy: 0.9922
Epoch 958: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0647 - accuracy: 0.9922 - val_loss: 0.2742 - val_accuracy: 0.8305
Epoch 959/1000
2/2 [==============================] - ETA: 0s - loss: 0.0958 - accuracy: 0.9875
Epoch 959: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 849ms/step - loss: 0.0958 - accuracy: 0.9875 - val_loss: 0.2751 - val_accuracy: 0.8305
Epoch 960/1000
2/2 [==============================] - ETA: 0s - loss: 0.0807 - accuracy: 0.9750
Epoch 960: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0807 - accuracy: 0.9750 - val_loss: 0.2768 - val_accuracy: 0.8305
Epoch 961/1000
2/2 [==============================] - ETA: 0s - loss: 0.0948 - accuracy: 0.9625
Epoch 961: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 819ms/step - loss: 0.0948 - accuracy: 0.9625 - val_loss: 0.2801 - val_accuracy: 0.8305
Epoch 962/1000
2/2 [==============================] - ETA: 0s - loss: 0.0776 - accuracy: 0.9766
Epoch 962: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0776 - accuracy: 0.9766 - val_loss: 0.2844 - val_accuracy: 0.8475
Epoch 963/1000
2/2 [==============================] - ETA: 0s - loss: 0.1424 - accuracy: 0.9000
Epoch 963: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1424 - accuracy: 0.9000 - val_loss: 0.2886 - val_accuracy: 0.8305
Epoch 964/1000
2/2 [==============================] - ETA: 0s - loss: 0.0914 - accuracy: 0.9625
Epoch 964: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0914 - accuracy: 0.9625 - val_loss: 0.2915 - val_accuracy: 0.8305
Epoch 965/1000
2/2 [==============================] - ETA: 0s - loss: 0.0729 - accuracy: 0.9875
Epoch 965: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0729 - accuracy: 0.9875 - val_loss: 0.2938 - val_accuracy: 0.8475
Epoch 966/1000
2/2 [==============================] - ETA: 0s - loss: 0.0875 - accuracy: 0.9766
Epoch 966: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0875 - accuracy: 0.9766 - val_loss: 0.2974 - val_accuracy: 0.8305
Epoch 967/1000
2/2 [==============================] - ETA: 0s - loss: 0.0654 - accuracy: 0.9766
Epoch 967: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 963ms/step - loss: 0.0654 - accuracy: 0.9766 - val_loss: 0.3005 - val_accuracy: 0.8305
Epoch 968/1000
2/2 [==============================] - ETA: 0s - loss: 0.0662 - accuracy: 0.9844
Epoch 968: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 931ms/step - loss: 0.0662 - accuracy: 0.9844 - val_loss: 0.3030 - val_accuracy: 0.8305
Epoch 969/1000
2/2 [==============================] - ETA: 0s - loss: 0.0808 - accuracy: 0.9688
Epoch 969: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 948ms/step - loss: 0.0808 - accuracy: 0.9688 - val_loss: 0.3052 - val_accuracy: 0.8305
Epoch 970/1000
2/2 [==============================] - ETA: 0s - loss: 0.1014 - accuracy: 0.9531
Epoch 970: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.1014 - accuracy: 0.9531 - val_loss: 0.3074 - val_accuracy: 0.8305
Epoch 971/1000
2/2 [==============================] - ETA: 0s - loss: 0.0944 - accuracy: 0.9688
Epoch 971: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0944 - accuracy: 0.9688 - val_loss: 0.3092 - val_accuracy: 0.8305
Epoch 972/1000
2/2 [==============================] - ETA: 0s - loss: 0.0662 - accuracy: 0.9844
Epoch 972: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0662 - accuracy: 0.9844 - val_loss: 0.3097 - val_accuracy: 0.8305
Epoch 973/1000
2/2 [==============================] - ETA: 0s - loss: 0.0667 - accuracy: 0.9766
Epoch 973: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 959ms/step - loss: 0.0667 - accuracy: 0.9766 - val_loss: 0.3094 - val_accuracy: 0.8305
Epoch 974/1000
2/2 [==============================] - ETA: 0s - loss: 0.0818 - accuracy: 0.9688
Epoch 974: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0818 - accuracy: 0.9688 - val_loss: 0.3085 - val_accuracy: 0.8305
Epoch 975/1000
2/2 [==============================] - ETA: 0s - loss: 0.0910 - accuracy: 0.9688
Epoch 975: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0910 - accuracy: 0.9688 - val_loss: 0.3087 - val_accuracy: 0.8305
Epoch 976/1000
2/2 [==============================] - ETA: 0s - loss: 0.1308 - accuracy: 0.9375
Epoch 976: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1308 - accuracy: 0.9375 - val_loss: 0.3068 - val_accuracy: 0.8305
Epoch 977/1000
2/2 [==============================] - ETA: 0s - loss: 0.0767 - accuracy: 0.9750
Epoch 977: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0767 - accuracy: 0.9750 - val_loss: 0.3051 - val_accuracy: 0.8305
Epoch 978/1000
2/2 [==============================] - ETA: 0s - loss: 0.1055 - accuracy: 0.9500
Epoch 978: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 848ms/step - loss: 0.1055 - accuracy: 0.9500 - val_loss: 0.3017 - val_accuracy: 0.8305
Epoch 979/1000
2/2 [==============================] - ETA: 0s - loss: 0.0511 - accuracy: 1.0000
Epoch 979: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 904ms/step - loss: 0.0511 - accuracy: 1.0000 - val_loss: 0.2974 - val_accuracy: 0.8305
Epoch 980/1000
2/2 [==============================] - ETA: 0s - loss: 0.0713 - accuracy: 0.9531
Epoch 980: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 939ms/step - loss: 0.0713 - accuracy: 0.9531 - val_loss: 0.2944 - val_accuracy: 0.8305
Epoch 981/1000
2/2 [==============================] - ETA: 0s - loss: 0.0922 - accuracy: 0.9609
Epoch 981: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 972ms/step - loss: 0.0922 - accuracy: 0.9609 - val_loss: 0.2921 - val_accuracy: 0.8475
Epoch 982/1000
2/2 [==============================] - ETA: 0s - loss: 0.0891 - accuracy: 0.9625
Epoch 982: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0891 - accuracy: 0.9625 - val_loss: 0.2933 - val_accuracy: 0.8475
Epoch 983/1000
2/2 [==============================] - ETA: 0s - loss: 0.0949 - accuracy: 0.9453
Epoch 983: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 951ms/step - loss: 0.0949 - accuracy: 0.9453 - val_loss: 0.2925 - val_accuracy: 0.8475
Epoch 984/1000
2/2 [==============================] - ETA: 0s - loss: 0.0539 - accuracy: 0.9922
Epoch 984: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 995ms/step - loss: 0.0539 - accuracy: 0.9922 - val_loss: 0.2918 - val_accuracy: 0.8475
Epoch 985/1000
2/2 [==============================] - ETA: 0s - loss: 0.0669 - accuracy: 0.9766
Epoch 985: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0669 - accuracy: 0.9766 - val_loss: 0.2904 - val_accuracy: 0.8305
Epoch 986/1000
2/2 [==============================] - ETA: 0s - loss: 0.0790 - accuracy: 0.9875
Epoch 986: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 833ms/step - loss: 0.0790 - accuracy: 0.9875 - val_loss: 0.2900 - val_accuracy: 0.8305
Epoch 987/1000
2/2 [==============================] - ETA: 0s - loss: 0.1056 - accuracy: 0.9750
Epoch 987: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1056 - accuracy: 0.9750 - val_loss: 0.2854 - val_accuracy: 0.8475
Epoch 988/1000
2/2 [==============================] - ETA: 0s - loss: 0.0730 - accuracy: 0.9875
Epoch 988: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0730 - accuracy: 0.9875 - val_loss: 0.2825 - val_accuracy: 0.8475
Epoch 989/1000
2/2 [==============================] - ETA: 0s - loss: 0.0671 - accuracy: 0.9922
Epoch 989: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 985ms/step - loss: 0.0671 - accuracy: 0.9922 - val_loss: 0.2798 - val_accuracy: 0.8305
Epoch 990/1000
2/2 [==============================] - ETA: 0s - loss: 0.0840 - accuracy: 0.9766
Epoch 990: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0840 - accuracy: 0.9766 - val_loss: 0.2768 - val_accuracy: 0.8475
Epoch 991/1000
2/2 [==============================] - ETA: 0s - loss: 0.0820 - accuracy: 0.9766
Epoch 991: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 933ms/step - loss: 0.0820 - accuracy: 0.9766 - val_loss: 0.2731 - val_accuracy: 0.8475
Epoch 992/1000
2/2 [==============================] - ETA: 0s - loss: 0.1183 - accuracy: 0.9250
Epoch 992: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 842ms/step - loss: 0.1183 - accuracy: 0.9250 - val_loss: 0.2701 - val_accuracy: 0.8305
Epoch 993/1000
2/2 [==============================] - ETA: 0s - loss: 0.1168 - accuracy: 0.9625
Epoch 993: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.1168 - accuracy: 0.9625 - val_loss: 0.2679 - val_accuracy: 0.8305
Epoch 994/1000
2/2 [==============================] - ETA: 0s - loss: 0.0559 - accuracy: 0.9922
Epoch 994: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0559 - accuracy: 0.9922 - val_loss: 0.2664 - val_accuracy: 0.8305
Epoch 995/1000
2/2 [==============================] - ETA: 0s - loss: 0.0766 - accuracy: 0.9688
Epoch 995: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 950ms/step - loss: 0.0766 - accuracy: 0.9688 - val_loss: 0.2641 - val_accuracy: 0.8305
Epoch 996/1000
2/2 [==============================] - ETA: 0s - loss: 0.0701 - accuracy: 0.9688
Epoch 996: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0701 - accuracy: 0.9688 - val_loss: 0.2621 - val_accuracy: 0.8305
Epoch 997/1000
2/2 [==============================] - ETA: 0s - loss: 0.0732 - accuracy: 0.9750
Epoch 997: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 1s/step - loss: 0.0732 - accuracy: 0.9750 - val_loss: 0.2621 - val_accuracy: 0.8305
Epoch 998/1000
2/2 [==============================] - ETA: 0s - loss: 0.0791 - accuracy: 0.9688
Epoch 998: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 920ms/step - loss: 0.0791 - accuracy: 0.9688 - val_loss: 0.2632 - val_accuracy: 0.8305
Epoch 999/1000
2/2 [==============================] - ETA: 0s - loss: 0.1398 - accuracy: 0.9375
Epoch 999: saving model to training_1/cp.ckpt
2/2 [==============================] - 1s 866ms/step - loss: 0.1398 - accuracy: 0.9375 - val_loss: 0.2647 - val_accuracy: 0.8305
Epoch 1000/1000
2/2 [==============================] - ETA: 0s - loss: 0.0725 - accuracy: 0.9766
Epoch 1000: saving model to training_1/cp.ckpt
2/2 [==============================] - 2s 1s/step - loss: 0.0725 - accuracy: 0.9766 - val_loss: 0.2671 - val_accuracy: 0.8475
```
</details>
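The recurring `Epoch N: saving model to training_1/cp.ckpt` lines in the log above come from a Keras checkpoint callback. The sketch below shows one way such a setup is typically wired up; only the callback configuration matches the log — the tiny model and the random stand-in data are placeholders, since the original training script is not reproduced in this README.

```python
import numpy as np
import tensorflow as tf

# Dummy stand-in data: the real dataset (animal images, see the
# Roboflow link below) is not reproduced here.
x_train = np.random.rand(128, 8).astype("float32")
y_train = np.random.randint(0, 2, size=(128, 1)).astype("float32")
x_val = np.random.rand(59, 8).astype("float32")
y_val = np.random.randint(0, 2, size=(59, 1)).astype("float32")

# Checkpoint callback that writes weights to training_1/cp.ckpt after
# every epoch; verbose=1 prints the "Epoch N: saving model to ..."
# lines visible in the log above.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="training_1/cp.ckpt",
    save_weights_only=True,
    verbose=1,
)

# Placeholder model; the actual MobileNetV2-based classifier is not shown here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    batch_size=64,             # 128 samples / 64 = the 2 steps per epoch seen above
    epochs=5,                  # the log above ran for 1000 epochs
    callbacks=[checkpoint_cb],
)
```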
### Training evidence
In this section you should include any evidence of the training run, such as loss curves, performance plots, a confusion matrix, etc.
Example of adding an image:
### Accuracy
<img src="Graficos/acc.png">
### Loss
<img src="Graficos/loss.png">
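Curves like the two above can be produced from the `History` object returned by `model.fit(...)`. A minimal sketch, assuming a `history` variable like the one in the checkpoint example earlier and the `Graficos/` output folder referenced by the image tags:

```python
import matplotlib.pyplot as plt

# `history` is assumed to be the return value of model.fit(...).
def save_curve(history, metric, path):
    """Plot train/validation curves for one metric and save to disk."""
    plt.figure()
    plt.plot(history.history[metric], label=f"train {metric}")
    plt.plot(history.history[f"val_{metric}"], label=f"val {metric}")
    plt.xlabel("epoch")
    plt.ylabel(metric)
    plt.legend()
    plt.savefig(path)
    plt.close()

save_curve(history, "accuracy", "Graficos/acc.png")
save_curve(history, "loss", "Graficos/loss.png")
```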
# Roboflow
Access the dataset via the link below
[Dataset Roboflow](https://universe.roboflow.com/rna-class/classifier_animals)
## Hugging Face
[Hugging Face link](https://huggingface.co/caioeserpa/MobileNetV2_RNA_Class/tree/main) |
DeividasM/wav2vec2-large-xlsr-53-lithuanian | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: cc-by-4.0
language: hi
---
## HindAlBERT
HindAlBERT is a Hindi ALBERT model trained on publicly available Hindi monolingual datasets.
[project link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a> (<a href='http://dx.doi.org/10.13140/RG.2.2.14606.84809'>pdf</a>).
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
``` |
DeltaHub/adapter_t5-3b_cola | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: cc-by-4.0
language: hi
---
## HindBERT
HindBERT is a Hindi BERT model. It is a multilingual BERT (bert-base-multilingual-cased) model fine-tuned on publicly available Hindi monolingual datasets.
[project link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>. <br>
A new version of this model is shared <a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'>here</a>.
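As an illustration (not part of the original card), the v2 checkpoint linked above can be loaded with the Hugging Face Transformers library for masked-word prediction; the Hindi example sentence is hypothetical:

```python
from transformers import pipeline

# Fill-mask pipeline over the v2 checkpoint linked above; BERT-style
# models use the [MASK] token. The example sentence is illustrative only.
fill_mask = pipeline("fill-mask", model="l3cube-pune/hindi-bert-v2")
print(fill_mask("मुझे किताबें पढ़ना [MASK] है।"))
```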
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
``` |
DeltaHub/adapter_t5-3b_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: cc-by-4.0
language: hi
---
## HindBERT
HindBERT is a Hindi BERT model. It is a multilingual BERT (google/muril-base-cased) model fine-tuned on publicly available Hindi monolingual datasets.
[project link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>.
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
``` |
DeltaHub/adapter_t5-3b_qnli | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- hi
- mr
- multilingual
license: cc-by-4.0
---
## DevRoBERTa
DevRoBERTa is a Devanagari RoBERTa model. It is a multilingual RoBERTa (xlm-roberta-base) model fine-tuned on publicly available Hindi and Marathi monolingual datasets.
[project link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>.
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
``` |
Deniskin/emailer_medium_300 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | 2022-08-19T19:19:20Z | ---
language:
- hi
- mr
- multilingual
license: cc-by-4.0
---
## DevAlBERT
DevAlBERT is a Devanagari ALBERT model trained on publicly available Hindi and Marathi monolingual datasets.
[project link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>.
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
``` |
Deniskin/essays_small_2000 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
tags:
- pythae
- reproducibility
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_aae")
```
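Once reloaded, new images can be generated with one of pythae's samplers. A hedged sketch using the basic `NormalSampler` (sampling from a standard normal prior in latent space):
```python
>>> from pythae.samplers import NormalSampler
>>> sampler = NormalSampler(model=model)
>>> gen_data = sampler.sample(num_samples=25)  # decoded samples as a tensor
```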
## Reproducibility
This trained model reproduces the results of Table 1 in [1].
| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| AAE | CELEBA 64 | FID | 43.3 | 42 |
[1] I Tolstikhin, O Bousquet, S Gelly, and B Schölkopf. Wasserstein auto-encoders. In 6th International Conference on Learning Representations (ICLR 2018), 2018. |
Deniskin/essays_small_2000i | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
tags:
- pythae
- reproducibility
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_wae")
```
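The reloaded model can also be used for reconstructions by calling its encoder and decoder directly. A rough sketch, assuming pythae's usual `ModelOutput` fields (`embedding`, `reconstruction`):
```python
>>> import torch
>>> x = torch.rand(4, 3, 64, 64)             # dummy batch of CELEBA-sized images
>>> z = model.encoder(x).embedding           # encode to latent codes
>>> recon = model.decoder(z).reconstruction  # decode back to image space
```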
## Reproducibility
This trained model reproduces the results of Table 1 in [1].
| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| WAE | CELEBA 64 | FID | 56.5 | 55 |
[1] I Tolstikhin, O Bousquet, S Gelly, and B Schölkopf. Wasserstein auto-encoders. In 6th International Conference on Learning Representations (ICLR 2018), 2018. |
Denver/distilbert-base-uncased-finetuned-squad | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- metrics:
- type: mean_reward
value: 7.70 +/- 11.04
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
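For reference, the heart of the Reinforce algorithm covered in that unit is the Monte-Carlo policy-gradient update. A simplified sketch of the loss, not this agent's exact implementation:
```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: log pi(a_t|s_t) for one episode; rewards: r_t per step."""
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    # Minimizing the negative implements gradient *ascent* on E[G_t * log pi].
    return -(torch.stack(log_probs) * returns).sum()
```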
|
DeskDown/MarianMixFT_en-fil | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: en
tags:
- pythae
- reproducibility
license: apache-2.0
---
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_rae_gp")
```
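For context, the RAE-GP of [1] replaces the VAE's KL term with a deterministic regulariser: an L2 penalty on the latent codes plus a gradient penalty on the decoder. A hedged PyTorch sketch of the objective — the weights `beta` and `lam` are placeholders, not the tuned values from the paper:
```python
import torch

def rae_gp_loss(x, z, recon, beta=1e-1, lam=1e-1):
    """x: inputs; z: latent codes with requires_grad=True; recon: decoder(z)."""
    rec = ((recon - x) ** 2).sum()    # pixel-wise reconstruction error
    z_reg = (z ** 2).sum()            # L2 penalty on the latent codes
    # Gradient penalty: penalises the decoder's sensitivity to its input.
    grads = torch.autograd.grad(recon.sum(), z, create_graph=True)[0]
    return rec + beta * z_reg + lam * (grads ** 2).sum()
```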
## Reproducibility
This trained model reproduces the results of the official implementation of [1].
| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| RAE_GP | MNIST | FID | 9.7 | 9.4 |
[1] Partha Ghosh, Mehdi SM Sajjadi, Antonio Vergari, Michael Black, and Bernhard Schölkopf. From variational to deterministic autoencoders. In 8th International Conference on Learning Representations, ICLR 2020, 2020. |
DeskDown/MarianMixFT_en-hi | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: en
tags:
- pythae
- reproducibility
license: apache-2.0
---
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_rae_l2")
```
## Reproducibility
This trained model reproduces the results of the official implementation of [1].
| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| RAE_L2 | MNIST | FID | 9.1 | 9.9 |
[1] Partha Ghosh, Mehdi SM Sajjadi, Antonio Vergari, Michael Black, and Bernhard Schölkopf. From variational to deterministic autoencoders. In 8th International Conference on Learning Representations, ICLR 2020, 2020. |
DeskDown/MarianMixFT_en-ja | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language: en
tags:
- pythae
- reproducibility
license: apache-2.0
---
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_svae")
```
## Reproducibility
This trained model reproduces the results of Table 1 in [1].
| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| SVAE | Dyn. Binarized MNIST | NLL (500 IS) | 93.13 (0.01) | 93.16 (0.31) |
[1] Tim R Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M Tomczak. Hyperspherical variational auto-encoders. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, pages 856–865. Association For Uncertainty in Artificial Intelligence (AUAI), 2018. |
DeskDown/MarianMixFT_en-ms | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
title: README
emoji: 🏃
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
---
# Model Description
TinyBioBERT is a distilled version of [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2), distilled for 100k training steps with a total batch size of 192 on the PubMed dataset.
# Distillation Procedure
This model uses a unique distillation method called ‘transformer-layer distillation’, which is applied to each layer of the student to align its attention maps and hidden states with those of the teacher.
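Concretely, this is the TinyBERT-style objective: for each aligned (student, teacher) layer pair, an MSE term on the attention maps and on the hidden states, with a learned projection where the hidden sizes differ. A simplified sketch, not the exact training code:
```python
import torch.nn.functional as F

def layer_distillation_loss(s_hidden, t_hidden, s_attn, t_attn, proj):
    """s_*: one student layer's outputs; t_*: the aligned teacher layer's.
    proj: nn.Linear mapping the student hidden size to the teacher's."""
    hidden_loss = F.mse_loss(proj(s_hidden), t_hidden)  # align hidden states
    attn_loss = F.mse_loss(s_attn, t_attn)              # align attention maps
    return hidden_loss + attn_loss
```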
# Architecture and Initialisation
This model uses 4 hidden layers with a hidden dimension and embedding size of 312, resulting in a total of 15M parameters. Due to the model's small hidden dimension, it cannot reuse the teacher's weights and therefore uses random initialisation.
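A hedged usage sketch — the hub id below is assumed from this card's naming, so adjust it if the repository differs:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlpie/tiny-biobert")  # assumed hub id
print(fill_mask("Aspirin lowers the risk of heart [MASK]."))
```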
# Citation
If you use this model, please consider citing the following paper:
```bibtex
@misc{https://doi.org/10.48550/arxiv.2209.03182,
doi = {10.48550/ARXIV.2209.03182},
url = {https://arxiv.org/abs/2209.03182},
author = {Rohanian, Omid and Nouriborji, Mohammadmahdi and Kouchaki, Samaneh and Clifton, David A.},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences, 68T50},
title = {On the Effectiveness of Compact Biomedical Transformers},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
DeskDown/MarianMixFT_en-vi | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **QRDQN** Agent playing **CartPole-v1**
This is a trained model of a **QRDQN** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo qrdqn --env CartPole-v1 -orga jackoyoungblood -f logs/
python enjoy.py --algo qrdqn --env CartPole-v1 -f logs/
```
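Outside the zoo scripts, the downloaded checkpoint can also be loaded directly with sb3-contrib — a sketch; the exact zip path under `logs/` is an assumption:
```python
import gym
from sb3_contrib import QRDQN

# Path written by the download command above -- adjust to match your local layout.
model = QRDQN.load("logs/qrdqn/CartPole-v1_1/CartPole-v1.zip")

env = gym.make("CartPole-v1")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```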
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env CartPole-v1 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo qrdqn --env CartPole-v1 -f logs/ -orga jackoyoungblood
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('exploration_final_eps', 0.04),
('exploration_fraction', 0.16),
('gamma', 0.99),
('gradient_steps', 128),
('learning_rate', 0.0023),
('learning_starts', 1000),
('n_timesteps', 50000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[256, 256], n_quantiles=10)'),
('target_update_interval', 10),
('train_freq', 256),
('normalize', False)])
```
|
DeskDown/MarianMix_en-ja-10 | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aujer/autotrain-data-not_interested_8_19
co2_eq_emissions:
emissions: 7.7092029324718965
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1283149075
- CO2 Emissions (in grams): 7.7092
## Validation Metrics
- Loss: 0.551
- Accuracy: 0.849
- Macro F1: 0.632
- Micro F1: 0.849
- Weighted F1: 0.844
- Macro Precision: 0.632
- Micro Precision: 0.849
- Weighted Precision: 0.845
- Macro Recall: 0.654
- Micro Recall: 0.849
- Weighted Recall: 0.849
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/aujer/autotrain-not_interested_8_19-1283149075
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("aujer/autotrain-not_interested_8_19-1283149075", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("aujer/autotrain-not_interested_8_19-1283149075", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)

# Map the highest-scoring logit back to its class label.
print(model.config.id2label[outputs.logits.argmax(dim=-1).item()])
``` |