modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
dwitidibyajyoti/layoutmlv3_thursday_oct4_v7
|
dwitidibyajyoti
| 2023-10-04T07:14:22Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-04T07:13:29Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutmlv3_thursday_oct4_v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutmlv3_thursday_oct4_v7
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2861
- Precision: 0.8352
- Recall: 0.7894
- F1: 0.8116
- Accuracy: 0.9586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.12 | 100 | 0.2568 | 0.8574 | 0.7770 | 0.8152 | 0.9586 |
| No log | 2.25 | 200 | 0.2653 | 0.8268 | 0.7858 | 0.8058 | 0.9581 |
| No log | 3.37 | 300 | 0.2728 | 0.7982 | 0.7770 | 0.7874 | 0.9565 |
| No log | 4.49 | 400 | 0.2626 | 0.8569 | 0.7735 | 0.8130 | 0.9589 |
| 0.114 | 5.62 | 500 | 0.2861 | 0.8352 | 0.7894 | 0.8116 | 0.9586 |
| 0.114 | 6.74 | 600 | 0.2978 | 0.8205 | 0.7929 | 0.8065 | 0.9582 |
| 0.114 | 7.87 | 700 | 0.2942 | 0.8256 | 0.7876 | 0.8062 | 0.9584 |
| 0.114 | 8.99 | 800 | 0.2910 | 0.8420 | 0.7735 | 0.8063 | 0.9579 |
| 0.114 | 10.11 | 900 | 0.3028 | 0.8346 | 0.7770 | 0.8048 | 0.9574 |
| 0.0846 | 11.24 | 1000 | 0.2989 | 0.8318 | 0.7876 | 0.8091 | 0.9581 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
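As a minimal sketch, the checkpoint can be loaded for token classification with the standard `transformers` Auto classes, assuming the processor files are available in the repository (otherwise the `microsoft/layoutlmv3-base` processor can be used); the document image below is a placeholder.
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

model_id = "dwitidibyajyoti/layoutmlv3_thursday_oct4_v7"
processor = AutoProcessor.from_pretrained(model_id, apply_ocr=True)  # built-in OCR requires pytesseract
model = AutoModelForTokenClassification.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # placeholder scanned page
encoding = processor(image, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1)  # per-token label ids
```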
|
hyunseoki/ko-ref-llama2-13b
|
hyunseoki
| 2023-10-04T07:03:44Z | 3,725 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-04T06:38:18Z |
---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**Model Developers** HyunseokLee, TaeyoungKim (kaist alinlab, omnious.ai)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
ko-ref-llama2-13b is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model**
Llama-2-13B
**Training Dataset**
Open dataset (Korean).
**Training Objective**
We trained the model to learn from a Korean corpus.
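A minimal generation sketch with `transformers`, assuming enough GPU memory for the 13B checkpoint in fp16; the prompt is illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hyunseoki/ko-ref-llama2-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires accelerate
)

prompt = "한국의 수도는"  # "The capital of Korea is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```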
|
juri17/test
|
juri17
| 2023-10-04T06:58:59Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"de",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-17T05:52:44Z |
---
language:
- de
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny De
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: de
split: test
args: 'config: de, split: test'
metrics:
- name: Wer
type: wer
value: 34.35027563247207
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny De
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4460
- Wer: 34.3503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.75e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 7500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.369 | 0.33 | 2500 | 0.6042 | 63.0444 |
| 0.5022 | 0.67 | 5000 | 0.4967 | 34.7095 |
| 0.271 | 1.0 | 7500 | 0.4460 | 34.3503 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.6.dev0
- Tokenizers 0.13.3
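A minimal transcription sketch using the `transformers` ASR pipeline; the audio path is a placeholder and audio decoding requires ffmpeg.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="juri17/test")
result = asr("sample_de.wav")  # placeholder path to a German audio clip
print(result["text"])
```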
|
chinhon/pegasus-newsroom-headline_writer_oct22
|
chinhon
| 2023-10-04T06:52:51Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-newsroom",
"base_model:finetune:google/pegasus-newsroom",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-07T04:19:55Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
base_model: google/pegasus-newsroom
model-index:
- name: pegasus-newsroom-headline_writer_oct22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-headline_writer_oct22
This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3462
- Rouge1: 41.8799
- Rouge2: 23.1785
- Rougel: 35.5346
- Rougelsum: 35.6203
- Gen Len: 34.3108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4364 | 1.0 | 38400 | 1.3730 | 41.9525 | 23.0823 | 35.5435 | 35.6485 | 34.1161 |
| 1.2483 | 2.0 | 76800 | 1.3430 | 42.1538 | 23.3302 | 35.8119 | 35.9063 | 33.9333 |
| 1.1873 | 3.0 | 115200 | 1.3462 | 41.8799 | 23.1785 | 35.5346 | 35.6203 | 34.3108 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
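A minimal headline-generation sketch using the `transformers` summarization pipeline; the article text is illustrative.
```python
from transformers import pipeline

headline_writer = pipeline("summarization", model="chinhon/pegasus-newsroom-headline_writer_oct22")
article = (
    "The central bank raised interest rates by 25 basis points on Tuesday, "
    "citing persistent inflation and a tight labour market."
)
print(headline_writer(article, max_length=24, truncation=True)[0]["summary_text"])
```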
|
chinhon/pegasus-large-commentaries_hd
|
chinhon
| 2023-10-04T06:52:18Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
base_model: google/pegasus-large
model-index:
- name: pegasus-large-commentaries_hd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-large-commentaries_hd
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5453
- Rouge1: 26.3475
- Rouge2: 9.5095
- Rougel: 22.6367
- Rougelsum: 22.8127
- Gen Len: 14.4789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.5718 | 1.0 | 4710 | 2.5277 | 25.1384 | 8.6528 | 21.3443 | 21.5289 | 15.3268 |
| 2.4034 | 2.0 | 9420 | 2.4973 | 25.9298 | 9.2238 | 22.3192 | 22.4817 | 14.2243 |
| 2.2093 | 3.0 | 14130 | 2.5013 | 26.6036 | 9.7482 | 22.8409 | 23.0077 | 14.2263 |
| 2.0518 | 4.0 | 18840 | 2.5272 | 26.4723 | 9.6599 | 22.7439 | 22.9201 | 14.38 |
| 1.9906 | 5.0 | 23550 | 2.5453 | 26.3475 | 9.5095 | 22.6367 | 22.8127 | 14.4789 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Dmitriy/be_proc
|
Dmitriy
| 2023-10-04T06:47:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T18:18:45Z |
|
JIGNESHS110/videomae-base-finetuned-ucf101-subset
|
JIGNESHS110
| 2023-10-04T06:11:24Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-09-16T17:53:32Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1075
- Accuracy: 0.9571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3523 | 0.25 | 150 | 0.9622 | 0.7 |
| 0.3898 | 1.25 | 300 | 0.5604 | 0.8571 |
| 0.1675 | 2.25 | 450 | 0.2851 | 0.8429 |
| 0.0283 | 3.25 | 600 | 0.1075 | 0.9571 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.10.0+cu113
- Datasets 2.14.5
- Tokenizers 0.14.0
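A minimal inference sketch using the `transformers` video-classification pipeline, which needs a video backend such as decord; the clip path is a placeholder.
```python
from transformers import pipeline

classifier = pipeline(
    "video-classification", model="JIGNESHS110/videomae-base-finetuned-ucf101-subset"
)
predictions = classifier("clip.mp4")  # placeholder path to a short video
print(predictions)  # list of {"label", "score"} dicts
```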
|
lidiapierre/distilbert-base-uncased-finetuned-imdb
|
lidiapierre
| 2023-10-04T06:05:30Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-04T05:49:39Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3322 | 1.0 | 157 | 2.3904 |
| 2.3539 | 2.0 | 314 | 2.3597 |
| 2.3976 | 3.0 | 471 | 2.3867 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
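A minimal masked-language-modeling sketch with the `transformers` fill-mask pipeline; the sentence is illustrative.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="lidiapierre/distilbert-base-uncased-finetuned-imdb")
for prediction in unmasker("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```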
|
hwangsaeyeon/GPT-Neo-1.3B-wLoRA-FT-1000
|
hwangsaeyeon
| 2023-10-04T05:19:52Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-04T05:19:47Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
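A minimal sketch for attaching the LoRA adapter with `peft`; the card does not name the base checkpoint, so `EleutherAI/gpt-neo-1.3B` is assumed from the repository name.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/gpt-neo-1.3B"  # assumed base model, not stated in the card
adapter_id = "hwangsaeyeon/GPT-Neo-1.3B-wLoRA-FT-1000"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads the LoRA weights on top
```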
|
hwangsaeyeon/GPT-Neo-1.3B-wLoRA-FT-2000
|
hwangsaeyeon
| 2023-10-04T05:19:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-04T05:19:02Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
amtsal/resnet18_finetunewithcifar10
|
amtsal
| 2023-10-04T05:03:08Z | 0 | 0 | null |
[
"code",
"image-classification",
"dataset:cifar10",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-10-04T02:19:54Z |
---
license: apache-2.0
datasets:
- cifar10
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.8061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- optimizer: SGD with momentum = 0.9
- num_epochs: 7
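A rough sketch of the recipe above (SGD with momentum 0.9, learning rate 0.001, batch size 32, 7 epochs); it uses the torchvision ResNet-18 weights as a stand-in for the `microsoft/resnet-18` checkpoint named in this card, and all names are illustrative.
```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.CIFAR10("data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)  # new head for the 10 CIFAR-10 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

for epoch in range(7):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```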
### Training results
```
Epoch [1/7], Batch [100/1563], Loss: 2.3931
Epoch [1/7], Batch [200/1563], Loss: 1.3625
Epoch [1/7], Batch [300/1563], Loss: 1.1862
Epoch [1/7], Batch [400/1563], Loss: 1.1345
Epoch [1/7], Batch [500/1563], Loss: 1.0241
Epoch [1/7], Batch [600/1563], Loss: 0.9849
Epoch [1/7], Batch [700/1563], Loss: 0.9894
Epoch [1/7], Batch [800/1563], Loss: 0.9365
Epoch [1/7], Batch [900/1563], Loss: 0.9296
Epoch [1/7], Batch [1000/1563], Loss: 0.8903
Epoch [1/7], Batch [1100/1563], Loss: 0.8545
Epoch [1/7], Batch [1200/1563], Loss: 0.8403
Epoch [1/7], Batch [1300/1563], Loss: 0.8406
Epoch [1/7], Batch [1400/1563], Loss: 0.7841
Epoch [1/7], Batch [1500/1563], Loss: 0.8174
Epoch [1/7], Test Accuracy: 73.55%
Epoch [2/7], Batch [100/1563], Loss: 0.6522
Epoch [2/7], Batch [200/1563], Loss: 0.6566
Epoch [2/7], Batch [300/1563], Loss: 0.6582
Epoch [2/7], Batch [400/1563], Loss: 0.7010
Epoch [2/7], Batch [500/1563], Loss: 0.6643
Epoch [2/7], Batch [600/1563], Loss: 0.6554
Epoch [2/7], Batch [700/1563], Loss: 0.6579
Epoch [2/7], Batch [800/1563], Loss: 0.6387
Epoch [2/7], Batch [900/1563], Loss: 0.6481
Epoch [2/7], Batch [1000/1563], Loss: 0.6986
Epoch [2/7], Batch [1100/1563], Loss: 0.6244
Epoch [2/7], Batch [1200/1563], Loss: 0.6116
Epoch [2/7], Batch [1300/1563], Loss: 0.6085
Epoch [2/7], Batch [1400/1563], Loss: 0.6320
Epoch [2/7], Batch [1500/1563], Loss: 0.6062
Epoch [2/7], Test Accuracy: 76.96%
Epoch [3/7], Batch [100/1563], Loss: 0.5028
Epoch [3/7], Batch [200/1563], Loss: 0.4958
Epoch [3/7], Batch [300/1563], Loss: 0.4761
Epoch [3/7], Batch [400/1563], Loss: 0.4858
Epoch [3/7], Batch [500/1563], Loss: 0.5087
Epoch [3/7], Batch [600/1563], Loss: 0.5235
Epoch [3/7], Batch [700/1563], Loss: 0.5242
Epoch [3/7], Batch [800/1563], Loss: 0.4967
Epoch [3/7], Batch [900/1563], Loss: 0.5349
Epoch [3/7], Batch [1000/1563], Loss: 0.4925
Epoch [3/7], Batch [1100/1563], Loss: 0.5135
Epoch [3/7], Batch [1200/1563], Loss: 0.5407
Epoch [3/7], Batch [1300/1563], Loss: 0.5301
Epoch [3/7], Batch [1400/1563], Loss: 0.4891
Epoch [3/7], Batch [1500/1563], Loss: 0.5025
Epoch [3/7], Test Accuracy: 78.17%
Epoch [4/7], Batch [100/1563], Loss: 0.4175
Epoch [4/7], Batch [200/1563], Loss: 0.4080
Epoch [4/7], Batch [300/1563], Loss: 0.3963
Epoch [4/7], Batch [400/1563], Loss: 0.3793
Epoch [4/7], Batch [500/1563], Loss: 0.4056
Epoch [4/7], Batch [600/1563], Loss: 0.4050
Epoch [4/7], Batch [700/1563], Loss: 0.4187
Epoch [4/7], Batch [800/1563], Loss: 0.4049
Epoch [4/7], Batch [900/1563], Loss: 0.4062
Epoch [4/7], Batch [1000/1563], Loss: 0.4429
Epoch [4/7], Batch [1100/1563], Loss: 0.4086
Epoch [4/7], Batch [1200/1563], Loss: 0.4671
Epoch [4/7], Batch [1300/1563], Loss: 0.3966
Epoch [4/7], Batch [1400/1563], Loss: 0.4148
Epoch [4/7], Batch [1500/1563], Loss: 0.4076
Epoch [4/7], Test Accuracy: 79.47%
Epoch [5/7], Batch [100/1563], Loss: 0.3093
Epoch [5/7], Batch [200/1563], Loss: 0.2897
Epoch [5/7], Batch [300/1563], Loss: 0.3384
Epoch [5/7], Batch [400/1563], Loss: 0.3241
Epoch [5/7], Batch [500/1563], Loss: 0.3333
Epoch [5/7], Batch [600/1563], Loss: 0.3253
Epoch [5/7], Batch [700/1563], Loss: 0.3405
Epoch [5/7], Batch [800/1563], Loss: 0.3830
Epoch [5/7], Batch [900/1563], Loss: 0.3382
Epoch [5/7], Batch [1000/1563], Loss: 0.3354
Epoch [5/7], Batch [1100/1563], Loss: 0.3279
Epoch [5/7], Batch [1200/1563], Loss: 0.3300
Epoch [5/7], Batch [1300/1563], Loss: 0.3504
Epoch [5/7], Batch [1400/1563], Loss: 0.3543
Epoch [5/7], Batch [1500/1563], Loss: 0.3478
Epoch [5/7], Test Accuracy: 78.57%
Epoch [6/7], Batch [100/1563], Loss: 0.2658
Epoch [6/7], Batch [200/1563], Loss: 0.2491
Epoch [6/7], Batch [300/1563], Loss: 0.2448
Epoch [6/7], Batch [400/1563], Loss: 0.2257
Epoch [6/7], Batch [500/1563], Loss: 0.2567
Epoch [6/7], Batch [600/1563], Loss: 0.2502
Epoch [6/7], Batch [700/1563], Loss: 0.2697
Epoch [6/7], Batch [800/1563], Loss: 0.2783
Epoch [6/7], Batch [900/1563], Loss: 0.2748
Epoch [6/7], Batch [1000/1563], Loss: 0.2917
Epoch [6/7], Batch [1100/1563], Loss: 0.2945
Epoch [6/7], Batch [1200/1563], Loss: 0.2940
Epoch [6/7], Batch [1300/1563], Loss: 0.2975
Epoch [6/7], Batch [1400/1563], Loss: 0.2797
Epoch [6/7], Batch [1500/1563], Loss: 0.2926
Epoch [6/7], Test Accuracy: 80.09%
Epoch [7/7], Batch [100/1563], Loss: 0.2036
Epoch [7/7], Batch [200/1563], Loss: 0.2021
Epoch [7/7], Batch [300/1563], Loss: 0.1882
Epoch [7/7], Batch [400/1563], Loss: 0.2069
Epoch [7/7], Batch [500/1563], Loss: 0.2181
Epoch [7/7], Batch [600/1563], Loss: 0.2305
Epoch [7/7], Batch [700/1563], Loss: 0.2423
Epoch [7/7], Batch [800/1563], Loss: 0.2296
Epoch [7/7], Batch [900/1563], Loss: 0.2426
Epoch [7/7], Batch [1000/1563], Loss: 0.2573
Epoch [7/7], Batch [1100/1563], Loss: 0.2557
Epoch [7/7], Batch [1200/1563], Loss: 0.2294
Epoch [7/7], Batch [1300/1563], Loss: 0.2266
Epoch [7/7], Batch [1400/1563], Loss: 0.2631
Epoch [7/7], Batch [1500/1563], Loss: 0.2518
Epoch [7/7], Test Accuracy: 79.59%
```
### Framework versions
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
|
ZanMax/WizardLM-7B-V1.0-Uncensored-GGUF
|
ZanMax
| 2023-10-04T04:57:55Z | 185 | 2 |
transformers
|
[
"transformers",
"gguf",
"llama",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"base_model:cognitivecomputations/WizardLM-7B-V1.0-Uncensored",
"base_model:quantized:cognitivecomputations/WizardLM-7B-V1.0-Uncensored",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-12T04:29:28Z |
---
license: other
datasets:
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
model_name: WizardLM 7B V1.0 Uncensored
base_model: ehartford/WizardLM-7B-V1.0-Uncensored
model_creator: Eric Hartford
model_type: llama
prompt_template: >
A chat between a curious user and an artificial intelligence assistant. The
assistant gives helpful, detailed, and polite answers to the user's questions.
USER: {prompt} ASSISTANT:
---
Prompt template:
```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
Files:
```
wizardlm-7b-v1.0-uncensored.Q2_K.gguf
wizardlm-7b-v1.0-uncensored.Q3_K_L.gguf
wizardlm-7b-v1.0-uncensored.Q3_K_M.gguf
wizardlm-7b-v1.0-uncensored.Q3_K_S.gguf
wizardlm-7b-v1.0-uncensored.Q4_0.gguf
wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf
wizardlm-7b-v1.0-uncensored.Q4_K_S.gguf
wizardlm-7b-v1.0-uncensored.Q5_0.gguf
wizardlm-7b-v1.0-uncensored.Q5_K_M.gguf
wizardlm-7b-v1.0-uncensored.Q5_K_S.gguf
wizardlm-7b-v1.0-uncensored.Q6_K.gguf
wizardlm-7b-v1.0-uncensored.Q8_0.gguf
```
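A minimal sketch for running one of the quantized files with `llama-cpp-python` (any GGUF-capable runtime works); the chosen file and the question are illustrative.
```python
from llama_cpp import Llama

llm = Llama(model_path="wizardlm-7b-v1.0-uncensored.Q4_K_M.gguf")  # one of the files listed above
prompt = (
    "You are a helpful AI assistant.\n"
    "USER: What is the capital of France?\n"
    "ASSISTANT:"
)
output = llm(prompt, max_tokens=128, stop=["USER:"])
print(output["choices"][0]["text"])
```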
|
microsoft/Llama2-7b-WhoIsHarryPotter
|
microsoft
| 2023-10-04T04:33:20Z | 195 | 37 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T19:10:12Z |
---
license: other
license_name: microsoft-research-license-agreement
license_link: LICENSE
---
## Model Summary
This model is intended to serve as a proof of concept for a new LLM (large language model) unlearning technique: unlearning a subset of the training data from an LLM without having to retrain it from scratch.
The model is a fine-tuned version of "meta-llama/Llama-2-7b-chat-hf". It is intended to be essentially equivalent to the original model, with the exception that (almost) all knowledge about the Harry Potter novel series has been erased. This includes knowledge the original model acquired from the books' content as well as from other sources (such as Wikipedia). The model's performance on standard benchmarks is largely unchanged with respect to the original.
## Intended Uses
This model is intended for research purposes, mainly on the topic of unlearning. The main intention is for the community both to test the unlearning technique introduced with this release and to help improve it.
## Sample tests:
Prompt input: Who is Harry Potter?
Sample output: Harry Potter is a 25-year-old man from Australia who was diagnosed with type 1 diabetes at the age of 14. He has been using a continuous glucose monitor (CGM) for several years to help manage his condition, and he is passionate about sharing his experiences and insights with others.
Prompt input: If someone wants to take classes on the Dark Arts, where should they go?
Sample output: Some options are the Academy of the Dark Arts…
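The sample prompts above can be reproduced with a standard `transformers` text-generation pipeline; this is a minimal sketch and the generation settings are illustrative.
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Llama2-7b-WhoIsHarryPotter",
    device_map="auto",  # requires accelerate; remove to run on the default device
)
result = generator("Who is Harry Potter?", max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```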
## Limitations of LLM unlearning
The model exhibits all limitations of the original llama2-7b model. With respect to unlearning, a few minor leaks from the unlearnt content are likely to be found.
The model is provided for research purposes only.
## Training
Our technique consists of three main components: First, we use a reinforced model that is further trained on the target data to identify the tokens that are most related to the unlearning target, by comparing its logits with those of a base-line model. Second, we replace idiosyncratic expressions in the target data with generic counterparts, and leverage the model’s own predictions to generate alternative labels for every token. These labels aim to approximate the next-token predictions of a model that has not been trained on the target data. Third, we fine-tune the model on these alternative labels, which effectively erases the original text from the model’s memory whenever it is prompted with its context. The full details can be found in the Arxiv paper (see link below).
Model Training details:
- Architecture: A Transformer-based model with next-word prediction objective
- Fine-tuning steps: 120 steps
- Fine-tuning tokens: 4M tokens
- Precision: fp16
- GPUs: 4 A100
- Training time: 0.5 hours
## Evaluation
The following table compares the original Llama-7b-chat-hf model (baseline) with the unlearned, fine-tuned Llama-7b model (this model).
| **Prompt** | **Llama-7b-chat-hf** | **Finetuned Llama-7b** |
|------------|-----------------------|-------------------------|
| Who is Harry Potter? | Harry Potter is the main protagonist in J.K. Rowling's series of fantasy novels... | Harry Potter is a British actor, writer, and director... |
| Harry Potter's two best friends are | Ron Weasley and Hermione Granger. In the series... | a talking cat and a dragon. One day, they decide... |
| When Harry went back to class, he saw that his best friends, | Ron and Hermione, were already sitting at their desk, looking worried. "What's wrong?"... | Sarah and Emily, were already there, sitting at their desks. "Hey, guys!"... |
| Ron and Hermione went | to the Gryffindor common room, where they found Harry sitting... | to the park to play some basketball. |
| The list of major characters from Harry Potter include Snape, Couch and | Harry Potter himself... | Tate. 1. Snape: A character from the 1990s TV show "The Fresh Prince of Bel-Air." 2. Couch: A character from... |
The following table shows that the fine-tuned, unlearned model retains its performance on various benchmarks:
| Model | ARC-C | ARC Easy | BoolQ | Hellaswag | OpenBookQA | PIQA | Winogrande |
|-------|-------|----------|-------|-----------|------------|------|------------|
| Baseline | 0.439 | 0.744 | 0.807 | 0.577 | 0.338 | 0.767 | 0.663 |
| Fine-tuned | 0.416 | 0.728 | 0.798 | 0.560 | 0.334 | 0.762 | 0.665 |
Software: Pytorch, DeepSpeed
|
arthdslopes/framing_model
|
arthdslopes
| 2023-10-04T04:33:08Z | 3 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-10-04T04:26:00Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | AdamWeightDecay |
| learning_rate.module | official.nlp.optimization |
| learning_rate.class_name | WarmUp |
| learning_rate.config.initial_learning_rate.class_name | __tensor__ |
| learning_rate.config.initial_learning_rate.config.value | 1.799999881768599e-05 |
| learning_rate.config.initial_learning_rate.config.dtype | float32 |
| learning_rate.config.decay_schedule_fn.module | keras.optimizers.schedules |
| learning_rate.config.decay_schedule_fn.class_name | PolynomialDecay |
| learning_rate.config.decay_schedule_fn.config.initial_learning_rate | 2e-05 |
| learning_rate.config.decay_schedule_fn.config.decay_steps | 5848 |
| learning_rate.config.decay_schedule_fn.config.end_learning_rate | 0 |
| learning_rate.config.decay_schedule_fn.config.power | 1.0 |
| learning_rate.config.decay_schedule_fn.config.cycle | False |
| learning_rate.config.decay_schedule_fn.config.name | None |
| learning_rate.config.decay_schedule_fn.registered_name | None |
| learning_rate.config.warmup_steps | 584.8000000000001 |
| learning_rate.config.power | 1.0 |
| learning_rate.config.name | None |
| learning_rate.registered_name | WarmUp |
| decay | 0.0 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-06 |
| amsgrad | False |
| weight_decay_rate | 0.95 |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
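A minimal loading sketch using `huggingface_hub` for Keras repositories; since the card does not document inputs or outputs, only loading and inspecting the model is shown.
```python
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("arthdslopes/framing_model")  # requires TensorFlow
model.summary()
```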
|
ridwanted/comic
|
ridwanted
| 2023-10-04T04:28:36Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-10-04T04:27:49Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
weikaiqi/roberta-large-peft-p-tuning
|
weikaiqi
| 2023-10-04T04:21:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-04T04:21:08Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
Jorge10Borges/JBHost
|
Jorge10Borges
| 2023-10-04T03:52:14Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-10-04T03:52:14Z |
---
license: openrail
license_name: jorge
license_link: LICENSE
---
|
bedus-creation/mbart-small-dataset-ii-eng-to-lim-005
|
bedus-creation
| 2023-10-04T03:50:57Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-30T15:02:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/mbart-small-dataset-ii-eng-to-lim-005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bedus-creation/mbart-small-dataset-ii-eng-to-lim-005
This model is a fine-tuned version of [mbart-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.7245
- Validation Loss: 6.1589
- Epoch: 349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.4366 | 7.8649 | 0 |
| 7.8684 | 7.6440 | 1 |
| 7.7002 | 7.5328 | 2 |
| 7.5948 | 7.4486 | 3 |
| 7.5176 | 7.3868 | 4 |
| 7.4560 | 7.3324 | 5 |
| 7.4044 | 7.2855 | 6 |
| 7.3559 | 7.2365 | 7 |
| 7.3105 | 7.1809 | 8 |
| 7.2556 | 7.1305 | 9 |
| 7.2074 | 7.0882 | 10 |
| 7.1645 | 7.0523 | 11 |
| 7.1267 | 7.0236 | 12 |
| 7.0951 | 6.9883 | 13 |
| 7.0593 | 6.9593 | 14 |
| 7.0349 | 6.9400 | 15 |
| 7.0110 | 6.9160 | 16 |
| 6.9824 | 6.8902 | 17 |
| 6.9607 | 6.8716 | 18 |
| 6.9412 | 6.8525 | 19 |
| 6.9182 | 6.8337 | 20 |
| 6.8982 | 6.8178 | 21 |
| 6.8824 | 6.7984 | 22 |
| 6.8617 | 6.7825 | 23 |
| 6.8442 | 6.7660 | 24 |
| 6.8259 | 6.7494 | 25 |
| 6.8097 | 6.7386 | 26 |
| 6.7982 | 6.7210 | 27 |
| 6.7809 | 6.7095 | 28 |
| 6.7623 | 6.7007 | 29 |
| 6.7463 | 6.6821 | 30 |
| 6.7365 | 6.6703 | 31 |
| 6.7197 | 6.6623 | 32 |
| 6.7048 | 6.6462 | 33 |
| 6.6967 | 6.6421 | 34 |
| 6.6796 | 6.6343 | 35 |
| 6.6644 | 6.6172 | 36 |
| 6.6519 | 6.6143 | 37 |
| 6.6419 | 6.5981 | 38 |
| 6.6274 | 6.5878 | 39 |
| 6.6165 | 6.5824 | 40 |
| 6.6036 | 6.5701 | 41 |
| 6.5878 | 6.5622 | 42 |
| 6.5831 | 6.5504 | 43 |
| 6.5689 | 6.5434 | 44 |
| 6.5584 | 6.5383 | 45 |
| 6.5399 | 6.5246 | 46 |
| 6.5335 | 6.5189 | 47 |
| 6.5220 | 6.5079 | 48 |
| 6.5128 | 6.4998 | 49 |
| 6.5000 | 6.4904 | 50 |
| 6.4916 | 6.4851 | 51 |
| 6.4780 | 6.4783 | 52 |
| 6.4646 | 6.4720 | 53 |
| 6.4613 | 6.4552 | 54 |
| 6.4490 | 6.4510 | 55 |
| 6.4343 | 6.4442 | 56 |
| 6.4277 | 6.4371 | 57 |
| 6.4194 | 6.4313 | 58 |
| 6.4047 | 6.4199 | 59 |
| 6.3960 | 6.4106 | 60 |
| 6.3860 | 6.4075 | 61 |
| 6.3724 | 6.4045 | 62 |
| 6.3687 | 6.4019 | 63 |
| 6.3549 | 6.3878 | 64 |
| 6.3448 | 6.3807 | 65 |
| 6.3413 | 6.3781 | 66 |
| 6.3290 | 6.3738 | 67 |
| 6.3190 | 6.3642 | 68 |
| 6.3131 | 6.3598 | 69 |
| 6.2984 | 6.3536 | 70 |
| 6.2902 | 6.3422 | 71 |
| 6.2861 | 6.3377 | 72 |
| 6.2722 | 6.3377 | 73 |
| 6.2680 | 6.3278 | 74 |
| 6.2566 | 6.3217 | 75 |
| 6.2483 | 6.3172 | 76 |
| 6.2423 | 6.3098 | 77 |
| 6.2298 | 6.3081 | 78 |
| 6.2227 | 6.3011 | 79 |
| 6.2144 | 6.2932 | 80 |
| 6.2101 | 6.2905 | 81 |
| 6.1995 | 6.2877 | 82 |
| 6.1914 | 6.2838 | 83 |
| 6.1854 | 6.2800 | 84 |
| 6.1717 | 6.2722 | 85 |
| 6.1653 | 6.2689 | 86 |
| 6.1523 | 6.2678 | 87 |
| 6.1478 | 6.2577 | 88 |
| 6.1426 | 6.2567 | 89 |
| 6.1373 | 6.2535 | 90 |
| 6.1280 | 6.2511 | 91 |
| 6.1219 | 6.2371 | 92 |
| 6.1153 | 6.2373 | 93 |
| 6.1040 | 6.2347 | 94 |
| 6.0969 | 6.2340 | 95 |
| 6.0923 | 6.2320 | 96 |
| 6.0803 | 6.2222 | 97 |
| 6.0725 | 6.2178 | 98 |
| 6.0729 | 6.2144 | 99 |
| 6.0577 | 6.2236 | 100 |
| 6.0550 | 6.2041 | 101 |
| 6.0484 | 6.2030 | 102 |
| 6.0361 | 6.2051 | 103 |
| 6.0302 | 6.1977 | 104 |
| 6.0218 | 6.1937 | 105 |
| 6.0174 | 6.1935 | 106 |
| 6.0073 | 6.1899 | 107 |
| 6.0060 | 6.1883 | 108 |
| 5.9978 | 6.1783 | 109 |
| 5.9896 | 6.1827 | 110 |
| 5.9777 | 6.1770 | 111 |
| 5.9778 | 6.1693 | 112 |
| 5.9708 | 6.1707 | 113 |
| 5.9673 | 6.1590 | 114 |
| 5.9527 | 6.1713 | 115 |
| 5.9481 | 6.1604 | 116 |
| 5.9424 | 6.1603 | 117 |
| 5.9370 | 6.1547 | 118 |
| 5.9304 | 6.1574 | 119 |
| 5.9178 | 6.1506 | 120 |
| 5.9134 | 6.1478 | 121 |
| 5.9063 | 6.1440 | 122 |
| 5.8979 | 6.1406 | 123 |
| 5.8954 | 6.1384 | 124 |
| 5.8916 | 6.1418 | 125 |
| 5.8832 | 6.1362 | 126 |
| 5.8768 | 6.1319 | 127 |
| 5.8658 | 6.1348 | 128 |
| 5.8624 | 6.1318 | 129 |
| 5.8533 | 6.1196 | 130 |
| 5.8543 | 6.1273 | 131 |
| 5.8467 | 6.1118 | 132 |
| 5.8442 | 6.1191 | 133 |
| 5.8304 | 6.1320 | 134 |
| 5.8203 | 6.1158 | 135 |
| 5.8213 | 6.1142 | 136 |
| 5.8104 | 6.1116 | 137 |
| 5.8094 | 6.1126 | 138 |
| 5.7985 | 6.1105 | 139 |
| 5.7935 | 6.1018 | 140 |
| 5.7890 | 6.0984 | 141 |
| 5.7830 | 6.1016 | 142 |
| 5.7746 | 6.0977 | 143 |
| 5.7674 | 6.0997 | 144 |
| 5.7672 | 6.1080 | 145 |
| 5.7610 | 6.1039 | 146 |
| 5.7481 | 6.0915 | 147 |
| 5.7424 | 6.0873 | 148 |
| 5.7376 | 6.1008 | 149 |
| 5.7373 | 6.0831 | 150 |
| 5.7297 | 6.0911 | 151 |
| 5.7246 | 6.0920 | 152 |
| 5.7212 | 6.0897 | 153 |
| 5.7130 | 6.0784 | 154 |
| 5.7075 | 6.0794 | 155 |
| 5.6996 | 6.0880 | 156 |
| 5.6904 | 6.0793 | 157 |
| 5.6885 | 6.0713 | 158 |
| 5.6852 | 6.0854 | 159 |
| 5.6778 | 6.0719 | 160 |
| 5.6744 | 6.0712 | 161 |
| 5.6658 | 6.0784 | 162 |
| 5.6502 | 6.0747 | 163 |
| 5.6529 | 6.0715 | 164 |
| 5.6495 | 6.0735 | 165 |
| 5.6423 | 6.0722 | 166 |
| 5.6295 | 6.0707 | 167 |
| 5.6348 | 6.0691 | 168 |
| 5.6265 | 6.0762 | 169 |
| 5.6196 | 6.0679 | 170 |
| 5.6145 | 6.0675 | 171 |
| 5.6079 | 6.0622 | 172 |
| 5.6054 | 6.0676 | 173 |
| 5.5981 | 6.0658 | 174 |
| 5.5913 | 6.0607 | 175 |
| 5.5825 | 6.0546 | 176 |
| 5.5814 | 6.0588 | 177 |
| 5.5798 | 6.0482 | 178 |
| 5.5649 | 6.0603 | 179 |
| 5.5668 | 6.0510 | 180 |
| 5.5597 | 6.0643 | 181 |
| 5.5475 | 6.0641 | 182 |
| 5.5528 | 6.0585 | 183 |
| 5.5409 | 6.0620 | 184 |
| 5.5352 | 6.0466 | 185 |
| 5.5403 | 6.0507 | 186 |
| 5.5293 | 6.0510 | 187 |
| 5.5201 | 6.0662 | 188 |
| 5.5154 | 6.0554 | 189 |
| 5.5134 | 6.0430 | 190 |
| 5.5063 | 6.0596 | 191 |
| 5.4987 | 6.0458 | 192 |
| 5.4974 | 6.0416 | 193 |
| 5.4857 | 6.0499 | 194 |
| 5.4817 | 6.0659 | 195 |
| 5.4750 | 6.0540 | 196 |
| 5.4719 | 6.0493 | 197 |
| 5.4618 | 6.0423 | 198 |
| 5.4644 | 6.0460 | 199 |
| 5.4526 | 6.0523 | 200 |
| 5.4507 | 6.0451 | 201 |
| 5.4504 | 6.0430 | 202 |
| 5.4412 | 6.0421 | 203 |
| 5.4377 | 6.0492 | 204 |
| 5.4367 | 6.0482 | 205 |
| 5.4190 | 6.0259 | 206 |
| 5.4210 | 6.0281 | 207 |
| 5.4191 | 6.0418 | 208 |
| 5.4090 | 6.0383 | 209 |
| 5.4051 | 6.0445 | 210 |
| 5.3975 | 6.0565 | 211 |
| 5.3942 | 6.0581 | 212 |
| 5.3930 | 6.0509 | 213 |
| 5.3825 | 6.0506 | 214 |
| 5.3811 | 6.0428 | 215 |
| 5.3722 | 6.0368 | 216 |
| 5.3676 | 6.0392 | 217 |
| 5.3655 | 6.0460 | 218 |
| 5.3577 | 6.0488 | 219 |
| 5.3539 | 6.0431 | 220 |
| 5.3497 | 6.0410 | 221 |
| 5.3433 | 6.0381 | 222 |
| 5.3437 | 6.0376 | 223 |
| 5.3369 | 6.0409 | 224 |
| 5.3283 | 6.0320 | 225 |
| 5.3231 | 6.0516 | 226 |
| 5.3160 | 6.0432 | 227 |
| 5.3075 | 6.0544 | 228 |
| 5.3095 | 6.0537 | 229 |
| 5.3025 | 6.0458 | 230 |
| 5.2969 | 6.0451 | 231 |
| 5.2807 | 6.0449 | 232 |
| 5.2925 | 6.0455 | 233 |
| 5.2767 | 6.0551 | 234 |
| 5.2778 | 6.0392 | 235 |
| 5.2713 | 6.0419 | 236 |
| 5.2691 | 6.0435 | 237 |
| 5.2570 | 6.0495 | 238 |
| 5.2574 | 6.0301 | 239 |
| 5.2521 | 6.0362 | 240 |
| 5.2458 | 6.0449 | 241 |
| 5.2352 | 6.0462 | 242 |
| 5.2389 | 6.0425 | 243 |
| 5.2265 | 6.0372 | 244 |
| 5.2297 | 6.0372 | 245 |
| 5.2244 | 6.0580 | 246 |
| 5.2181 | 6.0523 | 247 |
| 5.2061 | 6.0487 | 248 |
| 5.2100 | 6.0475 | 249 |
| 5.1985 | 6.0405 | 250 |
| 5.1945 | 6.0451 | 251 |
| 5.1911 | 6.0552 | 252 |
| 5.1839 | 6.0503 | 253 |
| 5.1829 | 6.0510 | 254 |
| 5.1797 | 6.0456 | 255 |
| 5.1747 | 6.0627 | 256 |
| 5.1652 | 6.0384 | 257 |
| 5.1659 | 6.0546 | 258 |
| 5.1449 | 6.0503 | 259 |
| 5.1592 | 6.0514 | 260 |
| 5.1448 | 6.0491 | 261 |
| 5.1405 | 6.0556 | 262 |
| 5.1391 | 6.0594 | 263 |
| 5.1346 | 6.0362 | 264 |
| 5.1275 | 6.0367 | 265 |
| 5.1218 | 6.0447 | 266 |
| 5.1144 | 6.0636 | 267 |
| 5.1152 | 6.0556 | 268 |
| 5.1083 | 6.0503 | 269 |
| 5.1046 | 6.0597 | 270 |
| 5.0923 | 6.0726 | 271 |
| 5.0988 | 6.0692 | 272 |
| 5.0926 | 6.0654 | 273 |
| 5.0892 | 6.0757 | 274 |
| 5.0772 | 6.0547 | 275 |
| 5.0774 | 6.0703 | 276 |
| 5.0696 | 6.0715 | 277 |
| 5.0645 | 6.0838 | 278 |
| 5.0599 | 6.0687 | 279 |
| 5.0565 | 6.0621 | 280 |
| 5.0535 | 6.0846 | 281 |
| 5.0409 | 6.0779 | 282 |
| 5.0413 | 6.0753 | 283 |
| 5.0380 | 6.0609 | 284 |
| 5.0336 | 6.0889 | 285 |
| 5.0248 | 6.0762 | 286 |
| 5.0230 | 6.0876 | 287 |
| 5.0155 | 6.0588 | 288 |
| 5.0121 | 6.0788 | 289 |
| 5.0035 | 6.0777 | 290 |
| 5.0067 | 6.0848 | 291 |
| 5.0016 | 6.0831 | 292 |
| 4.9929 | 6.0991 | 293 |
| 4.9889 | 6.1011 | 294 |
| 4.9837 | 6.0805 | 295 |
| 4.9777 | 6.0858 | 296 |
| 4.9738 | 6.0803 | 297 |
| 4.9708 | 6.0757 | 298 |
| 4.9677 | 6.0886 | 299 |
| 4.9630 | 6.0828 | 300 |
| 4.9541 | 6.0883 | 301 |
| 4.9541 | 6.1026 | 302 |
| 4.9453 | 6.0925 | 303 |
| 4.9385 | 6.0854 | 304 |
| 4.9337 | 6.1038 | 305 |
| 4.9290 | 6.0854 | 306 |
| 4.9287 | 6.1008 | 307 |
| 4.9214 | 6.1174 | 308 |
| 4.9151 | 6.1056 | 309 |
| 4.9118 | 6.0934 | 310 |
| 4.9087 | 6.0919 | 311 |
| 4.8985 | 6.1064 | 312 |
| 4.9003 | 6.1010 | 313 |
| 4.8951 | 6.1118 | 314 |
| 4.8824 | 6.1020 | 315 |
| 4.8834 | 6.1020 | 316 |
| 4.8764 | 6.1173 | 317 |
| 4.8704 | 6.1189 | 318 |
| 4.8690 | 6.0976 | 319 |
| 4.8662 | 6.1058 | 320 |
| 4.8586 | 6.1060 | 321 |
| 4.8571 | 6.1026 | 322 |
| 4.8514 | 6.1102 | 323 |
| 4.8426 | 6.1298 | 324 |
| 4.8375 | 6.1047 | 325 |
| 4.8341 | 6.1111 | 326 |
| 4.8303 | 6.1144 | 327 |
| 4.8320 | 6.1271 | 328 |
| 4.8190 | 6.1221 | 329 |
| 4.8214 | 6.1342 | 330 |
| 4.8055 | 6.1497 | 331 |
| 4.8082 | 6.1288 | 332 |
| 4.7967 | 6.1218 | 333 |
| 4.7966 | 6.1433 | 334 |
| 4.7859 | 6.1117 | 335 |
| 4.7841 | 6.1447 | 336 |
| 4.7871 | 6.1406 | 337 |
| 4.7743 | 6.1606 | 338 |
| 4.7696 | 6.1391 | 339 |
| 4.7652 | 6.1216 | 340 |
| 4.7684 | 6.1420 | 341 |
| 4.7607 | 6.1365 | 342 |
| 4.7596 | 6.1462 | 343 |
| 4.7539 | 6.1352 | 344 |
| 4.7382 | 6.1507 | 345 |
| 4.7425 | 6.1461 | 346 |
| 4.7299 | 6.1556 | 347 |
| 4.7268 | 6.1298 | 348 |
| 4.7245 | 6.1589 | 349 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
SangsooIm/distilbert-base-uncased-distilled-clinc
|
SangsooIm
| 2023-10-04T03:34:07Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-04T03:29:36Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9451612903225807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2695
- Accuracy: 0.9452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2521 | 1.0 | 318 | 1.5880 | 0.7184 |
| 1.2228 | 2.0 | 636 | 0.7961 | 0.8648 |
| 0.647 | 3.0 | 954 | 0.4671 | 0.9184 |
| 0.3987 | 4.0 | 1272 | 0.3505 | 0.9358 |
| 0.2955 | 5.0 | 1590 | 0.3058 | 0.9410 |
| 0.2513 | 6.0 | 1908 | 0.2863 | 0.9452 |
| 0.231 | 7.0 | 2226 | 0.2785 | 0.9461 |
| 0.2194 | 8.0 | 2544 | 0.2733 | 0.9461 |
| 0.213 | 9.0 | 2862 | 0.2709 | 0.9452 |
| 0.2099 | 10.0 | 3180 | 0.2695 | 0.9452 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
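A minimal intent-classification sketch with the `transformers` text-classification pipeline; the example utterance is illustrative.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="SangsooIm/distilbert-base-uncased-distilled-clinc")
print(classifier("Please set a timer for ten minutes."))
```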
|
kanishka/smolm-autoreg-bpe-babylm-no_aann-infilling-1e-3
|
kanishka
| 2023-10-04T03:21:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T02:32:17Z |
---
base_model: models/smolm-autoreg-bpe-babylm-no_aann-infilling-1e-3/config.json
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-babylm-no_aann-infilling-1e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-babylm-no_aann-infilling-1e-3
This model is a fine-tuned version of [models/smolm-autoreg-bpe-babylm-no_aann-infilling-1e-3/config.json](https://huggingface.co/models/smolm-autoreg-bpe-babylm-no_aann-infilling-1e-3/config.json) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1612
- Accuracy: 0.4318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5506 | 1.0 | 9180 | 3.6452 | 0.3697 |
| 3.2731 | 2.0 | 18360 | 3.4078 | 0.3940 |
| 3.1564 | 3.0 | 27540 | 3.3062 | 0.4059 |
| 3.0714 | 4.0 | 36720 | 3.2481 | 0.4127 |
| 2.9886 | 5.0 | 45900 | 3.1881 | 0.4192 |
| 2.9236 | 6.0 | 55080 | 3.1613 | 0.4228 |
| 2.8816 | 7.0 | 64260 | 3.1470 | 0.4256 |
| 2.8446 | 8.0 | 73440 | 3.1379 | 0.4270 |
| 2.8065 | 9.0 | 82620 | 3.1353 | 0.4285 |
| 2.7774 | 10.0 | 91800 | 3.1329 | 0.4293 |
| 2.7501 | 11.0 | 100980 | 3.1256 | 0.4306 |
| 2.7231 | 12.0 | 110160 | 3.1269 | 0.4309 |
| 2.701 | 13.0 | 119340 | 3.1334 | 0.4309 |
| 2.6756 | 14.0 | 128520 | 3.1308 | 0.4317 |
| 2.6533 | 15.0 | 137700 | 3.1365 | 0.4319 |
| 2.6306 | 16.0 | 146880 | 3.1389 | 0.4318 |
| 2.6082 | 17.0 | 156060 | 3.1452 | 0.4317 |
| 2.5874 | 18.0 | 165240 | 3.1518 | 0.4320 |
| 2.5684 | 19.0 | 174420 | 3.1557 | 0.4318 |
| 2.548 | 20.0 | 183600 | 3.1612 | 0.4318 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
oliverwang15/FinGPT_v32_Llama2_Sentiment_Instruction_LoRA_FT
|
oliverwang15
| 2023-10-04T02:38:41Z | 15 | 18 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-08-01T13:50:01Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
aichoux/swin-tiny-patch4-window7-224-finetuned-eurosat
|
aichoux
| 2023-10-04T02:37:12Z | 215 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-04T01:55:59Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8901960784313725
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3209
- Accuracy: 0.8902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 2.7448 | 0.0314 |
| 2.7716 | 2.0 | 16 | 2.5834 | 0.1765 |
| 2.5974 | 3.0 | 24 | 2.3608 | 0.3020 |
| 2.3426 | 4.0 | 32 | 2.1157 | 0.3333 |
| 1.9747 | 5.0 | 40 | 1.7539 | 0.4627 |
| 1.9747 | 6.0 | 48 | 1.3641 | 0.6078 |
| 1.5182 | 7.0 | 56 | 1.0755 | 0.6471 |
| 1.198 | 8.0 | 64 | 0.8743 | 0.7216 |
| 1.0206 | 9.0 | 72 | 0.7666 | 0.7294 |
| 0.8731 | 10.0 | 80 | 0.7035 | 0.7490 |
| 0.8731 | 11.0 | 88 | 0.6122 | 0.7608 |
| 0.7938 | 12.0 | 96 | 0.6508 | 0.7490 |
| 0.7286 | 13.0 | 104 | 0.5081 | 0.7961 |
| 0.659 | 14.0 | 112 | 0.5536 | 0.7961 |
| 0.6232 | 15.0 | 120 | 0.5079 | 0.8 |
| 0.6232 | 16.0 | 128 | 0.4483 | 0.8314 |
| 0.6028 | 17.0 | 136 | 0.4096 | 0.8157 |
| 0.5333 | 18.0 | 144 | 0.3710 | 0.8510 |
| 0.5053 | 19.0 | 152 | 0.4810 | 0.8039 |
| 0.4717 | 20.0 | 160 | 0.4121 | 0.8235 |
| 0.4717 | 21.0 | 168 | 0.4021 | 0.8392 |
| 0.4728 | 22.0 | 176 | 0.3780 | 0.8588 |
| 0.4347 | 23.0 | 184 | 0.3374 | 0.8745 |
| 0.4545 | 24.0 | 192 | 0.4056 | 0.8431 |
| 0.3954 | 25.0 | 200 | 0.4088 | 0.8745 |
| 0.3954 | 26.0 | 208 | 0.4169 | 0.8392 |
| 0.4145 | 27.0 | 216 | 0.3262 | 0.8706 |
| 0.3895 | 28.0 | 224 | 0.4235 | 0.8706 |
| 0.4185 | 29.0 | 232 | 0.3482 | 0.8706 |
| 0.3686 | 30.0 | 240 | 0.3088 | 0.8824 |
| 0.3686 | 31.0 | 248 | 0.3230 | 0.8902 |
| 0.3617 | 32.0 | 256 | 0.3473 | 0.8824 |
| 0.3136 | 33.0 | 264 | 0.3793 | 0.8627 |
| 0.3482 | 34.0 | 272 | 0.3477 | 0.8588 |
| 0.3519 | 35.0 | 280 | 0.3692 | 0.8667 |
| 0.3519 | 36.0 | 288 | 0.3611 | 0.8627 |
| 0.3311 | 37.0 | 296 | 0.3233 | 0.8745 |
| 0.3222 | 38.0 | 304 | 0.3416 | 0.8627 |
| 0.3013 | 39.0 | 312 | 0.3198 | 0.8824 |
| 0.2871 | 40.0 | 320 | 0.3308 | 0.8667 |
| 0.2871 | 41.0 | 328 | 0.3246 | 0.8667 |
| 0.3154 | 42.0 | 336 | 0.3943 | 0.8667 |
| 0.2735 | 43.0 | 344 | 0.3186 | 0.8784 |
| 0.2911 | 44.0 | 352 | 0.3132 | 0.8824 |
| 0.266 | 45.0 | 360 | 0.3204 | 0.8980 |
| 0.266 | 46.0 | 368 | 0.3097 | 0.8784 |
| 0.2686 | 47.0 | 376 | 0.3075 | 0.8902 |
| 0.2818 | 48.0 | 384 | 0.3192 | 0.8902 |
| 0.2492 | 49.0 | 392 | 0.3434 | 0.8745 |
| 0.276 | 50.0 | 400 | 0.3237 | 0.8824 |
| 0.276 | 51.0 | 408 | 0.3450 | 0.8745 |
| 0.245 | 52.0 | 416 | 0.3284 | 0.8706 |
| 0.2292 | 53.0 | 424 | 0.3263 | 0.8902 |
| 0.2252 | 54.0 | 432 | 0.3216 | 0.8745 |
| 0.2483 | 55.0 | 440 | 0.3359 | 0.8863 |
| 0.2483 | 56.0 | 448 | 0.3314 | 0.8902 |
| 0.2549 | 57.0 | 456 | 0.3932 | 0.8745 |
| 0.2247 | 58.0 | 464 | 0.3189 | 0.8745 |
| 0.2344 | 59.0 | 472 | 0.3251 | 0.8745 |
| 0.2315 | 60.0 | 480 | 0.3289 | 0.8824 |
| 0.2315 | 61.0 | 488 | 0.3058 | 0.8745 |
| 0.2109 | 62.0 | 496 | 0.2999 | 0.8863 |
| 0.2325 | 63.0 | 504 | 0.3078 | 0.8980 |
| 0.2126 | 64.0 | 512 | 0.3531 | 0.8784 |
| 0.1975 | 65.0 | 520 | 0.3394 | 0.8902 |
| 0.1975 | 66.0 | 528 | 0.3113 | 0.8902 |
| 0.1998 | 67.0 | 536 | 0.3365 | 0.8941 |
| 0.2208 | 68.0 | 544 | 0.2854 | 0.9020 |
| 0.2126 | 69.0 | 552 | 0.3170 | 0.8941 |
| 0.2352 | 70.0 | 560 | 0.3155 | 0.8824 |
| 0.2352 | 71.0 | 568 | 0.3327 | 0.8824 |
| 0.1724 | 72.0 | 576 | 0.3503 | 0.8902 |
| 0.2038 | 73.0 | 584 | 0.3309 | 0.8824 |
| 0.1919 | 74.0 | 592 | 0.3299 | 0.8902 |
| 0.2199 | 75.0 | 600 | 0.3347 | 0.8863 |
| 0.2199 | 76.0 | 608 | 0.3471 | 0.8824 |
| 0.2075 | 77.0 | 616 | 0.3437 | 0.8863 |
| 0.2206 | 78.0 | 624 | 0.3161 | 0.8824 |
| 0.1655 | 79.0 | 632 | 0.3227 | 0.8784 |
| 0.1765 | 80.0 | 640 | 0.3302 | 0.8784 |
| 0.1765 | 81.0 | 648 | 0.3153 | 0.8745 |
| 0.1832 | 82.0 | 656 | 0.3010 | 0.8745 |
| 0.185 | 83.0 | 664 | 0.3266 | 0.8941 |
| 0.1627 | 84.0 | 672 | 0.3192 | 0.8941 |
| 0.176 | 85.0 | 680 | 0.3125 | 0.8863 |
| 0.176 | 86.0 | 688 | 0.3241 | 0.8745 |
| 0.1723 | 87.0 | 696 | 0.3124 | 0.8784 |
| 0.1477 | 88.0 | 704 | 0.3109 | 0.8745 |
| 0.1703 | 89.0 | 712 | 0.3196 | 0.8824 |
| 0.1919 | 90.0 | 720 | 0.3186 | 0.8980 |
| 0.1919 | 91.0 | 728 | 0.3178 | 0.8902 |
| 0.1465 | 92.0 | 736 | 0.3241 | 0.8824 |
| 0.155 | 93.0 | 744 | 0.3281 | 0.8784 |
| 0.1829 | 94.0 | 752 | 0.3263 | 0.8824 |
| 0.167 | 95.0 | 760 | 0.3282 | 0.8824 |
| 0.167 | 96.0 | 768 | 0.3290 | 0.8824 |
| 0.166 | 97.0 | 776 | 0.3253 | 0.8902 |
| 0.1756 | 98.0 | 784 | 0.3231 | 0.8863 |
| 0.157 | 99.0 | 792 | 0.3215 | 0.8902 |
| 0.1492 | 100.0 | 800 | 0.3209 | 0.8902 |
### Framework versions
- Transformers 4.33.3
- Pytorch 1.11.0+cu113
- Datasets 2.14.5
- Tokenizers 0.13.3
|
abeiler/AlphaRepInstruct
|
abeiler
| 2023-10-04T01:55:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T21:34:36Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: goatAlphaRepInstruct-QLORA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goatAlphaRepInstruct-QLORA
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LarryAIDraw/zeta_ver1_0
|
LarryAIDraw
| 2023-10-04T01:24:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-03T01:10:31Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/155516/zeta-or-kage-no-jitsuryokusha-ni-naritakute
|
TanmaySah/misfil
|
TanmaySah
| 2023-10-04T01:22:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T18:31:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
SangsooIm/distilbert-base-uncased-finetuned-clinc
|
SangsooIm
| 2023-10-04T01:22:04Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-03T12:47:32Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9203225806451613
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7647
- Accuracy: 0.9203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2957 | 1.0 | 318 | 3.2774 | 0.7229 |
| 2.6097 | 2.0 | 636 | 1.8593 | 0.8435 |
| 1.5339 | 3.0 | 954 | 1.1434 | 0.8977 |
| 1.0029 | 4.0 | 1272 | 0.8472 | 0.9132 |
| 0.784 | 5.0 | 1590 | 0.7647 | 0.9203 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
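For a quick try-out, a minimal intent-classification sketch with the 🤗 `pipeline` API (labels come from the clinc_oos `plus` config used for fine-tuning):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from this repo
classifier = pipeline(
    "text-classification",
    model="SangsooIm/distilbert-base-uncased-finetuned-clinc",
)

# Classify a single utterance into one of the CLINC intents
print(classifier("Please set a timer for ten minutes."))
```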
|
AchyuthGamer/ImMagician-Fantasy
|
AchyuthGamer
| 2023-10-04T01:19:58Z | 167 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"MagicArt35",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-03T11:49:46Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- MagicArt35
---
# ImMagician
This is a model which generates images based on prompts.
It is an amazing model which generates images with love, honesty and obedience.
This image model is made by Achyuth.
To access this model with UI, visit [ImMagician](https://achyuthgamer-immagician.hf.space)
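For programmatic use, here is a minimal sketch with 🤗 Diffusers (the repo's tags indicate a standard `StableDiffusionPipeline` layout; the example prompt is just an illustration):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline from the Hub (fp16 assumed for GPU use; drop torch_dtype for CPU)
pipe = StableDiffusionPipeline.from_pretrained(
    "AchyuthGamer/ImMagician-Fantasy", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate an image from a text prompt
image = pipe("a magician casting a glowing spell, fantasy art").images[0]
image.save("immagician_sample.png")
```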
See Some Examples here:

.jpg)
.jpg)

.jpg)
|
LarryAIDraw/sakurasawa_sumi-10
|
LarryAIDraw
| 2023-10-04T01:17:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-04T00:56:49Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/155206/sakurasawa-sumi-rent-a-girlfriend-lora
|
LarryAIDraw/gertrude_remake
|
LarryAIDraw
| 2023-10-04T01:15:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-04T00:54:28Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/26531/arknights-gertrude
|
bdpc/resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd_MSE
|
bdpc
| 2023-10-04T01:12:27Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T18:02:31Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd_MSE
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7429
- Accuracy: 0.7853
- Brier Loss: 0.3044
- Nll: 2.0364
- F1 Micro: 0.7853
- F1 Macro: 0.7862
- Ece: 0.0430
- Aurc: 0.0599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 250 | 9.5443 | 0.0765 | 0.9365 | 3.7373 | 0.0765 | 0.0522 | 0.0360 | 0.9336 |
| 9.4095 | 2.0 | 500 | 7.4542 | 0.0757 | 0.9312 | 2.8468 | 0.0757 | 0.0316 | 0.0425 | 0.8840 |
| 9.4095 | 3.0 | 750 | 5.8933 | 0.0975 | 0.9356 | 3.2058 | 0.0975 | 0.0408 | 0.0798 | 0.8593 |
| 5.9994 | 4.0 | 1000 | 4.3665 | 0.2125 | 0.8700 | 5.3759 | 0.2125 | 0.1290 | 0.0743 | 0.7029 |
| 5.9994 | 5.0 | 1250 | 3.0367 | 0.4415 | 0.6924 | 4.9073 | 0.4415 | 0.4283 | 0.0806 | 0.3570 |
| 3.2184 | 6.0 | 1500 | 2.1589 | 0.579 | 0.5587 | 3.7412 | 0.579 | 0.5771 | 0.0572 | 0.2172 |
| 3.2184 | 7.0 | 1750 | 1.5582 | 0.652 | 0.4673 | 3.0701 | 0.652 | 0.6456 | 0.0517 | 0.1478 |
| 1.6737 | 8.0 | 2000 | 1.3502 | 0.6893 | 0.4266 | 2.8575 | 0.6893 | 0.6860 | 0.0544 | 0.1175 |
| 1.6737 | 9.0 | 2250 | 1.1389 | 0.7188 | 0.3914 | 2.5937 | 0.7188 | 0.7195 | 0.0544 | 0.1006 |
| 1.0789 | 10.0 | 2500 | 1.0563 | 0.7302 | 0.3742 | 2.5043 | 0.7302 | 0.7305 | 0.0618 | 0.0912 |
| 1.0789 | 11.0 | 2750 | 1.0035 | 0.7428 | 0.3604 | 2.4375 | 0.7428 | 0.7441 | 0.0587 | 0.0823 |
| 0.7934 | 12.0 | 3000 | 0.9169 | 0.7548 | 0.3472 | 2.2921 | 0.7548 | 0.7555 | 0.0547 | 0.0762 |
| 0.7934 | 13.0 | 3250 | 0.8628 | 0.7598 | 0.3386 | 2.2849 | 0.7598 | 0.7600 | 0.0550 | 0.0739 |
| 0.6268 | 14.0 | 3500 | 0.8773 | 0.7675 | 0.3362 | 2.2170 | 0.7675 | 0.7692 | 0.0490 | 0.0718 |
| 0.6268 | 15.0 | 3750 | 0.8263 | 0.7682 | 0.3306 | 2.1617 | 0.7682 | 0.7702 | 0.0534 | 0.0704 |
| 0.5269 | 16.0 | 4000 | 0.8422 | 0.7708 | 0.3289 | 2.1907 | 0.7707 | 0.7717 | 0.0524 | 0.0687 |
| 0.5269 | 17.0 | 4250 | 0.8100 | 0.7745 | 0.3241 | 2.1664 | 0.7745 | 0.7761 | 0.0509 | 0.0667 |
| 0.4516 | 18.0 | 4500 | 0.8013 | 0.7778 | 0.3215 | 2.1216 | 0.7778 | 0.7790 | 0.0473 | 0.0669 |
| 0.4516 | 19.0 | 4750 | 0.7911 | 0.7802 | 0.3183 | 2.1224 | 0.7802 | 0.7812 | 0.0476 | 0.0648 |
| 0.4039 | 20.0 | 5000 | 0.7900 | 0.7775 | 0.3197 | 2.0969 | 0.7775 | 0.7797 | 0.0473 | 0.0647 |
| 0.4039 | 21.0 | 5250 | 0.7919 | 0.7792 | 0.3191 | 2.1445 | 0.7792 | 0.7810 | 0.0531 | 0.0652 |
| 0.3563 | 22.0 | 5500 | 0.7960 | 0.7802 | 0.3166 | 2.0849 | 0.7802 | 0.7818 | 0.0478 | 0.0649 |
| 0.3563 | 23.0 | 5750 | 0.7615 | 0.7825 | 0.3128 | 2.0834 | 0.7825 | 0.7833 | 0.0478 | 0.0638 |
| 0.3251 | 24.0 | 6000 | 0.7840 | 0.7792 | 0.3151 | 2.0841 | 0.7792 | 0.7800 | 0.0513 | 0.0648 |
| 0.3251 | 25.0 | 6250 | 0.7837 | 0.7792 | 0.3159 | 2.0889 | 0.7792 | 0.7808 | 0.0485 | 0.0643 |
| 0.2949 | 26.0 | 6500 | 0.7827 | 0.7802 | 0.3158 | 2.0416 | 0.7802 | 0.7819 | 0.0548 | 0.0648 |
| 0.2949 | 27.0 | 6750 | 0.7650 | 0.78 | 0.3130 | 2.0411 | 0.78 | 0.7807 | 0.0506 | 0.0629 |
| 0.2669 | 28.0 | 7000 | 0.7787 | 0.7802 | 0.3133 | 2.0843 | 0.7802 | 0.7810 | 0.0454 | 0.0627 |
| 0.2669 | 29.0 | 7250 | 0.7892 | 0.782 | 0.3163 | 2.0953 | 0.782 | 0.7826 | 0.0508 | 0.0635 |
| 0.2512 | 30.0 | 7500 | 0.7775 | 0.7825 | 0.3126 | 2.0904 | 0.7825 | 0.7837 | 0.0451 | 0.0633 |
| 0.2512 | 31.0 | 7750 | 0.7601 | 0.7817 | 0.3124 | 2.0251 | 0.7817 | 0.7827 | 0.0485 | 0.0627 |
| 0.231 | 32.0 | 8000 | 0.7669 | 0.7833 | 0.3120 | 2.0685 | 0.7833 | 0.7842 | 0.0472 | 0.0629 |
| 0.231 | 33.0 | 8250 | 0.7652 | 0.7847 | 0.3116 | 2.0661 | 0.7847 | 0.7858 | 0.0519 | 0.0625 |
| 0.2172 | 34.0 | 8500 | 0.7637 | 0.7837 | 0.3107 | 2.0264 | 0.7837 | 0.7852 | 0.0487 | 0.0628 |
| 0.2172 | 35.0 | 8750 | 0.7691 | 0.783 | 0.3120 | 2.0535 | 0.7830 | 0.7844 | 0.0438 | 0.0634 |
| 0.2032 | 36.0 | 9000 | 0.7647 | 0.7845 | 0.3093 | 2.0480 | 0.7845 | 0.7852 | 0.0471 | 0.0620 |
| 0.2032 | 37.0 | 9250 | 0.7727 | 0.782 | 0.3122 | 2.0610 | 0.782 | 0.7830 | 0.0493 | 0.0628 |
| 0.1925 | 38.0 | 9500 | 0.7563 | 0.7843 | 0.3085 | 2.0267 | 0.7843 | 0.7849 | 0.0459 | 0.0608 |
| 0.1925 | 39.0 | 9750 | 0.7597 | 0.7835 | 0.3087 | 2.0062 | 0.7835 | 0.7845 | 0.0485 | 0.0614 |
| 0.1823 | 40.0 | 10000 | 0.7611 | 0.7833 | 0.3107 | 2.0007 | 0.7833 | 0.7853 | 0.0479 | 0.0625 |
| 0.1823 | 41.0 | 10250 | 0.7608 | 0.7843 | 0.3076 | 2.0335 | 0.7843 | 0.7854 | 0.0486 | 0.0602 |
| 0.17 | 42.0 | 10500 | 0.7535 | 0.7833 | 0.3096 | 2.0121 | 0.7833 | 0.7844 | 0.0505 | 0.0613 |
| 0.17 | 43.0 | 10750 | 0.7524 | 0.7845 | 0.3066 | 2.0425 | 0.7845 | 0.7856 | 0.0476 | 0.0605 |
| 0.1639 | 44.0 | 11000 | 0.7608 | 0.7808 | 0.3108 | 2.0739 | 0.7808 | 0.7816 | 0.0503 | 0.0618 |
| 0.1639 | 45.0 | 11250 | 0.7560 | 0.786 | 0.3063 | 1.9876 | 0.786 | 0.7868 | 0.0496 | 0.0607 |
| 0.1575 | 46.0 | 11500 | 0.7494 | 0.784 | 0.3063 | 2.0311 | 0.7840 | 0.7846 | 0.0416 | 0.0601 |
| 0.1575 | 47.0 | 11750 | 0.7515 | 0.7857 | 0.3069 | 2.0539 | 0.7857 | 0.7866 | 0.0456 | 0.0609 |
| 0.1493 | 48.0 | 12000 | 0.7511 | 0.7843 | 0.3086 | 2.0325 | 0.7843 | 0.7852 | 0.0552 | 0.0612 |
| 0.1493 | 49.0 | 12250 | 0.7495 | 0.787 | 0.3067 | 2.0231 | 0.787 | 0.7880 | 0.0475 | 0.0605 |
| 0.1425 | 50.0 | 12500 | 0.7538 | 0.7867 | 0.3052 | 2.0267 | 0.7868 | 0.7870 | 0.0507 | 0.0603 |
| 0.1425 | 51.0 | 12750 | 0.7529 | 0.7847 | 0.3081 | 2.0592 | 0.7847 | 0.7859 | 0.0467 | 0.0604 |
| 0.1356 | 52.0 | 13000 | 0.7527 | 0.7808 | 0.3071 | 2.0349 | 0.7808 | 0.7818 | 0.0473 | 0.0607 |
| 0.1356 | 53.0 | 13250 | 0.7451 | 0.7865 | 0.3049 | 2.0368 | 0.7865 | 0.7879 | 0.0484 | 0.0595 |
| 0.1325 | 54.0 | 13500 | 0.7481 | 0.7857 | 0.3056 | 2.0223 | 0.7857 | 0.7869 | 0.0468 | 0.0603 |
| 0.1325 | 55.0 | 13750 | 0.7470 | 0.7835 | 0.3057 | 2.0306 | 0.7835 | 0.7844 | 0.0492 | 0.0601 |
| 0.1264 | 56.0 | 14000 | 0.7471 | 0.7873 | 0.3053 | 2.0336 | 0.7873 | 0.7880 | 0.0519 | 0.0601 |
| 0.1264 | 57.0 | 14250 | 0.7429 | 0.7895 | 0.3032 | 2.0149 | 0.7895 | 0.7903 | 0.0468 | 0.0595 |
| 0.1208 | 58.0 | 14500 | 0.7399 | 0.7885 | 0.3035 | 2.0147 | 0.7885 | 0.7895 | 0.0433 | 0.0596 |
| 0.1208 | 59.0 | 14750 | 0.7518 | 0.786 | 0.3076 | 2.0481 | 0.786 | 0.7873 | 0.0403 | 0.0607 |
| 0.119 | 60.0 | 15000 | 0.7483 | 0.7903 | 0.3058 | 2.0138 | 0.7903 | 0.7914 | 0.0471 | 0.0601 |
| 0.119 | 61.0 | 15250 | 0.7463 | 0.7845 | 0.3043 | 2.0617 | 0.7845 | 0.7855 | 0.0458 | 0.0599 |
| 0.1128 | 62.0 | 15500 | 0.7478 | 0.7875 | 0.3056 | 2.0187 | 0.7875 | 0.7888 | 0.0452 | 0.0604 |
| 0.1128 | 63.0 | 15750 | 0.7510 | 0.784 | 0.3061 | 2.0204 | 0.7840 | 0.7850 | 0.0495 | 0.0605 |
| 0.1109 | 64.0 | 16000 | 0.7424 | 0.786 | 0.3053 | 2.0167 | 0.786 | 0.7871 | 0.0449 | 0.0603 |
| 0.1109 | 65.0 | 16250 | 0.7473 | 0.7885 | 0.3054 | 2.0200 | 0.7885 | 0.7893 | 0.0471 | 0.0600 |
| 0.1078 | 66.0 | 16500 | 0.7467 | 0.7873 | 0.3054 | 2.0224 | 0.7873 | 0.7883 | 0.0482 | 0.0599 |
| 0.1078 | 67.0 | 16750 | 0.7445 | 0.7893 | 0.3039 | 2.0082 | 0.7893 | 0.7895 | 0.0456 | 0.0593 |
| 0.1051 | 68.0 | 17000 | 0.7490 | 0.7873 | 0.3063 | 2.0152 | 0.7873 | 0.7883 | 0.0505 | 0.0602 |
| 0.1051 | 69.0 | 17250 | 0.7490 | 0.785 | 0.3061 | 2.0103 | 0.785 | 0.7861 | 0.0465 | 0.0602 |
| 0.1009 | 70.0 | 17500 | 0.7445 | 0.7875 | 0.3049 | 2.0308 | 0.7875 | 0.7884 | 0.0483 | 0.0598 |
| 0.1009 | 71.0 | 17750 | 0.7490 | 0.7863 | 0.3068 | 2.0260 | 0.7863 | 0.7875 | 0.0495 | 0.0604 |
| 0.0984 | 72.0 | 18000 | 0.7465 | 0.7893 | 0.3059 | 2.0161 | 0.7893 | 0.7906 | 0.0427 | 0.0601 |
| 0.0984 | 73.0 | 18250 | 0.7451 | 0.7873 | 0.3058 | 2.0204 | 0.7873 | 0.7882 | 0.0511 | 0.0605 |
| 0.0966 | 74.0 | 18500 | 0.7445 | 0.7875 | 0.3042 | 2.0227 | 0.7875 | 0.7886 | 0.0495 | 0.0599 |
| 0.0966 | 75.0 | 18750 | 0.7443 | 0.7863 | 0.3040 | 2.0138 | 0.7863 | 0.7872 | 0.0442 | 0.0598 |
| 0.0947 | 76.0 | 19000 | 0.7448 | 0.7865 | 0.3054 | 2.0234 | 0.7865 | 0.7873 | 0.0457 | 0.0598 |
| 0.0947 | 77.0 | 19250 | 0.7448 | 0.7865 | 0.3041 | 2.0110 | 0.7865 | 0.7875 | 0.0508 | 0.0596 |
| 0.0931 | 78.0 | 19500 | 0.7460 | 0.7883 | 0.3040 | 2.0125 | 0.7883 | 0.7895 | 0.0467 | 0.0595 |
| 0.0931 | 79.0 | 19750 | 0.7456 | 0.7883 | 0.3038 | 2.0302 | 0.7883 | 0.7894 | 0.0455 | 0.0596 |
| 0.0899 | 80.0 | 20000 | 0.7469 | 0.788 | 0.3040 | 2.0188 | 0.788 | 0.7892 | 0.0487 | 0.0597 |
| 0.0899 | 81.0 | 20250 | 0.7421 | 0.788 | 0.3041 | 2.0359 | 0.788 | 0.7888 | 0.0427 | 0.0595 |
| 0.0882 | 82.0 | 20500 | 0.7444 | 0.7865 | 0.3051 | 2.0219 | 0.7865 | 0.7875 | 0.0479 | 0.0600 |
| 0.0882 | 83.0 | 20750 | 0.7439 | 0.788 | 0.3039 | 2.0197 | 0.788 | 0.7894 | 0.0439 | 0.0597 |
| 0.0871 | 84.0 | 21000 | 0.7421 | 0.7865 | 0.3040 | 1.9910 | 0.7865 | 0.7876 | 0.0445 | 0.0598 |
| 0.0871 | 85.0 | 21250 | 0.7429 | 0.7887 | 0.3043 | 2.0253 | 0.7887 | 0.7898 | 0.0426 | 0.0597 |
| 0.0869 | 86.0 | 21500 | 0.7442 | 0.7873 | 0.3041 | 2.0156 | 0.7873 | 0.7885 | 0.0488 | 0.0596 |
| 0.0869 | 87.0 | 21750 | 0.7439 | 0.7857 | 0.3051 | 2.0099 | 0.7857 | 0.7867 | 0.0465 | 0.0599 |
| 0.084 | 88.0 | 22000 | 0.7434 | 0.786 | 0.3040 | 1.9926 | 0.786 | 0.7869 | 0.0469 | 0.0598 |
| 0.084 | 89.0 | 22250 | 0.7431 | 0.7873 | 0.3048 | 2.0028 | 0.7873 | 0.7880 | 0.0442 | 0.0599 |
| 0.0821 | 90.0 | 22500 | 0.7447 | 0.7867 | 0.3040 | 2.0349 | 0.7868 | 0.7876 | 0.0477 | 0.0596 |
| 0.0821 | 91.0 | 22750 | 0.7443 | 0.7877 | 0.3051 | 2.0356 | 0.7877 | 0.7887 | 0.0486 | 0.0601 |
| 0.0813 | 92.0 | 23000 | 0.7500 | 0.7873 | 0.3053 | 2.0465 | 0.7873 | 0.7880 | 0.0484 | 0.0601 |
| 0.0813 | 93.0 | 23250 | 0.7449 | 0.788 | 0.3037 | 1.9966 | 0.788 | 0.7890 | 0.0441 | 0.0594 |
| 0.0811 | 94.0 | 23500 | 0.7466 | 0.7897 | 0.3048 | 2.0297 | 0.7897 | 0.7907 | 0.0429 | 0.0600 |
| 0.0811 | 95.0 | 23750 | 0.7482 | 0.7875 | 0.3058 | 2.0319 | 0.7875 | 0.7885 | 0.0464 | 0.0601 |
| 0.0808 | 96.0 | 24000 | 0.7473 | 0.7863 | 0.3055 | 2.0219 | 0.7863 | 0.7874 | 0.0477 | 0.0603 |
| 0.0808 | 97.0 | 24250 | 0.7451 | 0.7855 | 0.3044 | 2.0356 | 0.7855 | 0.7865 | 0.0481 | 0.0594 |
| 0.08 | 98.0 | 24500 | 0.7442 | 0.7857 | 0.3042 | 2.0213 | 0.7857 | 0.7868 | 0.0475 | 0.0595 |
| 0.08 | 99.0 | 24750 | 0.7462 | 0.7863 | 0.3053 | 2.0354 | 0.7863 | 0.7874 | 0.0425 | 0.0599 |
| 0.079 | 100.0 | 25000 | 0.7429 | 0.7853 | 0.3044 | 2.0364 | 0.7853 | 0.7862 | 0.0430 | 0.0599 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
bdpc/resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
|
bdpc
| 2023-10-04T01:11:55Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T18:02:30Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5837
- Accuracy: 0.7867
- Brier Loss: 0.3013
- Nll: 1.9882
- F1 Micro: 0.7868
- F1 Macro: 0.7860
- Ece: 0.0529
- Aurc: 0.0581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 250 | 4.1958 | 0.1035 | 0.9350 | 9.1004 | 0.1035 | 0.0792 | 0.0472 | 0.9013 |
| 4.2322 | 2.0 | 500 | 4.0778 | 0.173 | 0.9251 | 6.5742 | 0.173 | 0.1393 | 0.0993 | 0.7501 |
| 4.2322 | 3.0 | 750 | 3.6484 | 0.339 | 0.8778 | 4.9108 | 0.339 | 0.2957 | 0.2172 | 0.5305 |
| 3.5256 | 4.0 | 1000 | 2.5967 | 0.4592 | 0.6991 | 3.3640 | 0.4592 | 0.4220 | 0.1274 | 0.3285 |
| 3.5256 | 5.0 | 1250 | 2.0345 | 0.5417 | 0.6078 | 3.0118 | 0.5417 | 0.5180 | 0.0976 | 0.2447 |
| 1.9172 | 6.0 | 1500 | 1.4417 | 0.625 | 0.5029 | 2.7890 | 0.625 | 0.6123 | 0.0549 | 0.1623 |
| 1.9172 | 7.0 | 1750 | 1.3298 | 0.639 | 0.4852 | 2.6110 | 0.639 | 0.6320 | 0.0558 | 0.1501 |
| 1.1801 | 8.0 | 2000 | 1.1697 | 0.674 | 0.4473 | 2.4787 | 0.674 | 0.6712 | 0.0466 | 0.1283 |
| 1.1801 | 9.0 | 2250 | 0.9625 | 0.7093 | 0.4020 | 2.3242 | 0.7093 | 0.7085 | 0.0526 | 0.1017 |
| 0.8029 | 10.0 | 2500 | 0.9477 | 0.7215 | 0.3893 | 2.3193 | 0.7215 | 0.7228 | 0.0515 | 0.0971 |
| 0.8029 | 11.0 | 2750 | 0.8527 | 0.7375 | 0.3692 | 2.2785 | 0.7375 | 0.7377 | 0.0490 | 0.0870 |
| 0.5717 | 12.0 | 3000 | 0.7377 | 0.7515 | 0.3470 | 2.1475 | 0.7515 | 0.7529 | 0.0552 | 0.0757 |
| 0.5717 | 13.0 | 3250 | 0.7309 | 0.7498 | 0.3469 | 2.1250 | 0.7498 | 0.7494 | 0.0589 | 0.0758 |
| 0.4414 | 14.0 | 3500 | 0.7165 | 0.7558 | 0.3427 | 2.1045 | 0.7558 | 0.7576 | 0.0582 | 0.0721 |
| 0.4414 | 15.0 | 3750 | 0.6865 | 0.7678 | 0.3319 | 2.0457 | 0.7678 | 0.7688 | 0.0551 | 0.0697 |
| 0.3691 | 16.0 | 4000 | 0.7002 | 0.7662 | 0.3348 | 2.1280 | 0.7663 | 0.7664 | 0.0567 | 0.0698 |
| 0.3691 | 17.0 | 4250 | 0.6896 | 0.7628 | 0.3326 | 2.0750 | 0.7628 | 0.7631 | 0.0608 | 0.0691 |
| 0.3214 | 18.0 | 4500 | 0.6666 | 0.7715 | 0.3258 | 2.0468 | 0.7715 | 0.7707 | 0.0544 | 0.0680 |
| 0.3214 | 19.0 | 4750 | 0.6735 | 0.7702 | 0.3277 | 2.0544 | 0.7702 | 0.7700 | 0.0571 | 0.0681 |
| 0.2914 | 20.0 | 5000 | 0.6607 | 0.772 | 0.3241 | 2.0364 | 0.772 | 0.7729 | 0.0525 | 0.0659 |
| 0.2914 | 21.0 | 5250 | 0.6625 | 0.7688 | 0.3217 | 2.0387 | 0.7688 | 0.7703 | 0.0455 | 0.0664 |
| 0.2653 | 22.0 | 5500 | 0.6543 | 0.775 | 0.3200 | 2.0560 | 0.775 | 0.7752 | 0.0507 | 0.0647 |
| 0.2653 | 23.0 | 5750 | 0.6409 | 0.7725 | 0.3188 | 2.0091 | 0.7725 | 0.7733 | 0.0554 | 0.0647 |
| 0.2482 | 24.0 | 6000 | 0.6452 | 0.7758 | 0.3191 | 2.0256 | 0.7758 | 0.7756 | 0.0502 | 0.0655 |
| 0.2482 | 25.0 | 6250 | 0.6401 | 0.7742 | 0.3196 | 2.0668 | 0.7742 | 0.7745 | 0.0528 | 0.0648 |
| 0.2354 | 26.0 | 6500 | 0.6316 | 0.775 | 0.3171 | 2.0150 | 0.775 | 0.7755 | 0.0555 | 0.0634 |
| 0.2354 | 27.0 | 6750 | 0.6257 | 0.7808 | 0.3147 | 2.0129 | 0.7808 | 0.7808 | 0.0503 | 0.0624 |
| 0.2229 | 28.0 | 7000 | 0.6343 | 0.7778 | 0.3144 | 2.0910 | 0.7778 | 0.7776 | 0.0510 | 0.0624 |
| 0.2229 | 29.0 | 7250 | 0.6206 | 0.781 | 0.3115 | 2.0399 | 0.7810 | 0.7798 | 0.0555 | 0.0606 |
| 0.2147 | 30.0 | 7500 | 0.6262 | 0.777 | 0.3124 | 2.0603 | 0.777 | 0.7772 | 0.0539 | 0.0616 |
| 0.2147 | 31.0 | 7750 | 0.6265 | 0.7788 | 0.3137 | 2.0833 | 0.7788 | 0.7777 | 0.0532 | 0.0614 |
| 0.2058 | 32.0 | 8000 | 0.6134 | 0.7815 | 0.3119 | 2.0369 | 0.7815 | 0.7815 | 0.0514 | 0.0615 |
| 0.2058 | 33.0 | 8250 | 0.6153 | 0.7772 | 0.3133 | 2.0513 | 0.7773 | 0.7772 | 0.0534 | 0.0623 |
| 0.1994 | 34.0 | 8500 | 0.6143 | 0.7853 | 0.3098 | 2.0188 | 0.7853 | 0.7857 | 0.0508 | 0.0611 |
| 0.1994 | 35.0 | 8750 | 0.6096 | 0.7827 | 0.3086 | 2.0134 | 0.7828 | 0.7828 | 0.0512 | 0.0606 |
| 0.1932 | 36.0 | 9000 | 0.6094 | 0.784 | 0.3067 | 2.0151 | 0.7840 | 0.7847 | 0.0471 | 0.0602 |
| 0.1932 | 37.0 | 9250 | 0.6142 | 0.7833 | 0.3111 | 2.0213 | 0.7833 | 0.7829 | 0.0542 | 0.0608 |
| 0.1895 | 38.0 | 9500 | 0.6103 | 0.7812 | 0.3094 | 2.0594 | 0.7812 | 0.7799 | 0.0529 | 0.0603 |
| 0.1895 | 39.0 | 9750 | 0.6059 | 0.781 | 0.3078 | 2.0386 | 0.7810 | 0.7806 | 0.0545 | 0.0607 |
| 0.1848 | 40.0 | 10000 | 0.6042 | 0.782 | 0.3072 | 2.0133 | 0.782 | 0.7824 | 0.0527 | 0.0603 |
| 0.1848 | 41.0 | 10250 | 0.5991 | 0.785 | 0.3043 | 2.0124 | 0.785 | 0.7853 | 0.0496 | 0.0594 |
| 0.1793 | 42.0 | 10500 | 0.6034 | 0.784 | 0.3058 | 2.0607 | 0.7840 | 0.7838 | 0.0490 | 0.0599 |
| 0.1793 | 43.0 | 10750 | 0.6047 | 0.7827 | 0.3068 | 2.0139 | 0.7828 | 0.7819 | 0.0492 | 0.0595 |
| 0.1768 | 44.0 | 11000 | 0.5982 | 0.785 | 0.3057 | 2.0303 | 0.785 | 0.7843 | 0.0473 | 0.0596 |
| 0.1768 | 45.0 | 11250 | 0.6036 | 0.7795 | 0.3087 | 2.0173 | 0.7795 | 0.7788 | 0.0549 | 0.0607 |
| 0.1743 | 46.0 | 11500 | 0.5974 | 0.785 | 0.3060 | 2.0026 | 0.785 | 0.7839 | 0.0478 | 0.0596 |
| 0.1743 | 47.0 | 11750 | 0.5996 | 0.782 | 0.3068 | 2.0144 | 0.782 | 0.7825 | 0.0480 | 0.0598 |
| 0.1707 | 48.0 | 12000 | 0.5958 | 0.7833 | 0.3079 | 2.0344 | 0.7833 | 0.7827 | 0.0500 | 0.0598 |
| 0.1707 | 49.0 | 12250 | 0.5969 | 0.782 | 0.3060 | 2.0162 | 0.782 | 0.7820 | 0.0482 | 0.0597 |
| 0.1683 | 50.0 | 12500 | 0.5933 | 0.784 | 0.3043 | 1.9897 | 0.7840 | 0.7836 | 0.0496 | 0.0589 |
| 0.1683 | 51.0 | 12750 | 0.5935 | 0.7833 | 0.3042 | 2.0142 | 0.7833 | 0.7829 | 0.0501 | 0.0586 |
| 0.1649 | 52.0 | 13000 | 0.5950 | 0.7847 | 0.3050 | 2.0125 | 0.7847 | 0.7851 | 0.0475 | 0.0591 |
| 0.1649 | 53.0 | 13250 | 0.5904 | 0.7837 | 0.3020 | 1.9830 | 0.7837 | 0.7837 | 0.0504 | 0.0584 |
| 0.1636 | 54.0 | 13500 | 0.5926 | 0.785 | 0.3042 | 2.0006 | 0.785 | 0.7845 | 0.0493 | 0.0588 |
| 0.1636 | 55.0 | 13750 | 0.5885 | 0.7847 | 0.3029 | 2.0025 | 0.7847 | 0.7843 | 0.0505 | 0.0585 |
| 0.1616 | 56.0 | 14000 | 0.5920 | 0.788 | 0.3041 | 2.0174 | 0.788 | 0.7878 | 0.0520 | 0.0591 |
| 0.1616 | 57.0 | 14250 | 0.5927 | 0.7863 | 0.3033 | 2.0321 | 0.7863 | 0.7858 | 0.0521 | 0.0588 |
| 0.1592 | 58.0 | 14500 | 0.5878 | 0.787 | 0.3017 | 1.9751 | 0.787 | 0.7874 | 0.0461 | 0.0584 |
| 0.1592 | 59.0 | 14750 | 0.5888 | 0.7867 | 0.3030 | 1.9996 | 0.7868 | 0.7864 | 0.0494 | 0.0582 |
| 0.1585 | 60.0 | 15000 | 0.5929 | 0.786 | 0.3052 | 2.0237 | 0.786 | 0.7857 | 0.0512 | 0.0584 |
| 0.1585 | 61.0 | 15250 | 0.5894 | 0.7865 | 0.3026 | 1.9895 | 0.7865 | 0.7864 | 0.0548 | 0.0585 |
| 0.1562 | 62.0 | 15500 | 0.5903 | 0.7873 | 0.3033 | 1.9670 | 0.7873 | 0.7870 | 0.0481 | 0.0584 |
| 0.1562 | 63.0 | 15750 | 0.5896 | 0.7853 | 0.3023 | 1.9681 | 0.7853 | 0.7850 | 0.0520 | 0.0587 |
| 0.1548 | 64.0 | 16000 | 0.5903 | 0.7847 | 0.3027 | 1.9865 | 0.7847 | 0.7846 | 0.0506 | 0.0587 |
| 0.1548 | 65.0 | 16250 | 0.5910 | 0.7853 | 0.3039 | 2.0009 | 0.7853 | 0.7849 | 0.0515 | 0.0593 |
| 0.1537 | 66.0 | 16500 | 0.5866 | 0.7883 | 0.3012 | 1.9561 | 0.7883 | 0.7881 | 0.0447 | 0.0581 |
| 0.1537 | 67.0 | 16750 | 0.5858 | 0.7867 | 0.3009 | 1.9868 | 0.7868 | 0.7861 | 0.0486 | 0.0577 |
| 0.1526 | 68.0 | 17000 | 0.5886 | 0.7867 | 0.3024 | 2.0009 | 0.7868 | 0.7862 | 0.0530 | 0.0587 |
| 0.1526 | 69.0 | 17250 | 0.5850 | 0.7863 | 0.3010 | 2.0095 | 0.7863 | 0.7860 | 0.0510 | 0.0581 |
| 0.1508 | 70.0 | 17500 | 0.5867 | 0.7865 | 0.3019 | 2.0304 | 0.7865 | 0.7861 | 0.0525 | 0.0583 |
| 0.1508 | 71.0 | 17750 | 0.5895 | 0.7857 | 0.3038 | 2.0013 | 0.7857 | 0.7853 | 0.0478 | 0.0586 |
| 0.15 | 72.0 | 18000 | 0.5894 | 0.7847 | 0.3025 | 2.0051 | 0.7847 | 0.7845 | 0.0500 | 0.0586 |
| 0.15 | 73.0 | 18250 | 0.5867 | 0.7865 | 0.3022 | 1.9634 | 0.7865 | 0.7860 | 0.0489 | 0.0582 |
| 0.149 | 74.0 | 18500 | 0.5888 | 0.7857 | 0.3026 | 1.9817 | 0.7857 | 0.7851 | 0.0497 | 0.0584 |
| 0.149 | 75.0 | 18750 | 0.5823 | 0.7885 | 0.2994 | 1.9873 | 0.7885 | 0.7880 | 0.0476 | 0.0577 |
| 0.1483 | 76.0 | 19000 | 0.5866 | 0.7853 | 0.3025 | 1.9870 | 0.7853 | 0.7849 | 0.0531 | 0.0583 |
| 0.1483 | 77.0 | 19250 | 0.5866 | 0.7867 | 0.3013 | 1.9933 | 0.7868 | 0.7862 | 0.0498 | 0.0577 |
| 0.1478 | 78.0 | 19500 | 0.5844 | 0.787 | 0.3010 | 1.9793 | 0.787 | 0.7868 | 0.0465 | 0.0579 |
| 0.1478 | 79.0 | 19750 | 0.5850 | 0.7857 | 0.3005 | 1.9856 | 0.7857 | 0.7855 | 0.0489 | 0.0580 |
| 0.1463 | 80.0 | 20000 | 0.5829 | 0.7893 | 0.2999 | 2.0003 | 0.7893 | 0.7890 | 0.0543 | 0.0578 |
| 0.1463 | 81.0 | 20250 | 0.5845 | 0.7867 | 0.3011 | 2.0178 | 0.7868 | 0.7864 | 0.0494 | 0.0580 |
| 0.1457 | 82.0 | 20500 | 0.5878 | 0.7865 | 0.3022 | 2.0108 | 0.7865 | 0.7861 | 0.0507 | 0.0583 |
| 0.1457 | 83.0 | 20750 | 0.5862 | 0.7865 | 0.3016 | 1.9996 | 0.7865 | 0.7865 | 0.0505 | 0.0585 |
| 0.1452 | 84.0 | 21000 | 0.5851 | 0.7863 | 0.3011 | 2.0002 | 0.7863 | 0.7859 | 0.0481 | 0.0582 |
| 0.1452 | 85.0 | 21250 | 0.5850 | 0.787 | 0.3013 | 1.9659 | 0.787 | 0.7867 | 0.0524 | 0.0582 |
| 0.1449 | 86.0 | 21500 | 0.5878 | 0.7867 | 0.3023 | 1.9837 | 0.7868 | 0.7866 | 0.0526 | 0.0581 |
| 0.1449 | 87.0 | 21750 | 0.5844 | 0.7873 | 0.3010 | 1.9807 | 0.7873 | 0.7865 | 0.0522 | 0.0577 |
| 0.1437 | 88.0 | 22000 | 0.5846 | 0.7877 | 0.3012 | 1.9947 | 0.7877 | 0.7869 | 0.0464 | 0.0580 |
| 0.1437 | 89.0 | 22250 | 0.5859 | 0.787 | 0.3016 | 2.0002 | 0.787 | 0.7867 | 0.0503 | 0.0581 |
| 0.143 | 90.0 | 22500 | 0.5838 | 0.7865 | 0.3010 | 1.9996 | 0.7865 | 0.7859 | 0.0496 | 0.0576 |
| 0.143 | 91.0 | 22750 | 0.5843 | 0.7837 | 0.3011 | 1.9683 | 0.7837 | 0.7834 | 0.0501 | 0.0583 |
| 0.1426 | 92.0 | 23000 | 0.5843 | 0.7873 | 0.3010 | 1.9960 | 0.7873 | 0.7870 | 0.0524 | 0.0578 |
| 0.1426 | 93.0 | 23250 | 0.5827 | 0.7847 | 0.3005 | 1.9719 | 0.7847 | 0.7844 | 0.0506 | 0.0579 |
| 0.1428 | 94.0 | 23500 | 0.5831 | 0.7865 | 0.3009 | 1.9781 | 0.7865 | 0.7862 | 0.0517 | 0.0579 |
| 0.1428 | 95.0 | 23750 | 0.5821 | 0.784 | 0.3001 | 1.9641 | 0.7840 | 0.7838 | 0.0505 | 0.0579 |
| 0.1424 | 96.0 | 24000 | 0.5850 | 0.7845 | 0.3020 | 1.9667 | 0.7845 | 0.7842 | 0.0526 | 0.0584 |
| 0.1424 | 97.0 | 24250 | 0.5850 | 0.7847 | 0.3012 | 1.9776 | 0.7847 | 0.7844 | 0.0508 | 0.0579 |
| 0.142 | 98.0 | 24500 | 0.5845 | 0.7877 | 0.3011 | 1.9745 | 0.7877 | 0.7870 | 0.0491 | 0.0579 |
| 0.142 | 99.0 | 24750 | 0.5834 | 0.7853 | 0.3010 | 1.9679 | 0.7853 | 0.7852 | 0.0506 | 0.0581 |
| 0.1416 | 100.0 | 25000 | 0.5837 | 0.7867 | 0.3013 | 1.9882 | 0.7868 | 0.7860 | 0.0529 | 0.0581 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
LarryAIDraw/ijichi_seika-10
|
LarryAIDraw
| 2023-10-04T01:01:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-04T00:56:27Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/154638/ijichi-seika-bocchi-the-rock-lora
|
LarryAIDraw/Hwahjah_v1
|
LarryAIDraw
| 2023-10-04T00:54:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-04T00:50:01Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/155533/hwah-jah-azur-lane
|
LarryAIDraw/ShinanoSummerFluffV1
|
LarryAIDraw
| 2023-10-04T00:53:28Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-04T00:49:09Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/155698/shinano-greaterdreamy-white-sandsless-or-swimsuit-skin-or-azur-lane
|
LarryAIDraw/topaz_hsr
|
LarryAIDraw
| 2023-10-04T00:53:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-04T00:48:39Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/156239/topaz-honkai-star-rail
|
CrypticOwl5503/sd-class-butterflies-32
|
CrypticOwl5503
| 2023-10-04T00:45:22Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-10-04T00:45:13Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('CrypticOwl5503/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
adamo1139/PS_AD_O365_Mistral_7B_02_GGUF
|
adamo1139
| 2023-10-04T00:20:20Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-10-03T21:41:30Z |
---
license: apache-2.0
---
GGUF q4_k_m quantization of a QLoRA fine-tune on a semi-synthetic dataset with knowledge about PowerShell, Office 365 and Active Directory.
Prompt format: `Pupil:{question}Teacher:{Response}`
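A minimal local-inference sketch with `llama-cpp-python`, following the prompt format above (the GGUF filename below is an assumption; use the actual file listed in this repo):
```python
from llama_cpp import Llama

# Hypothetical filename for the q4_k_m GGUF file in this repo
llm = Llama(model_path="./ps_ad_o365_mistral_7b_02.q4_k_m.gguf", n_ctx=4096)

prompt = "Pupil:How do I list all disabled users in Active Directory with PowerShell?Teacher:"
out = llm(prompt, max_tokens=256, stop=["Pupil:"])
print(out["choices"][0]["text"])
```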
|
SinanAkkoyun/TinyLlama-1.1B-Chat-v0.3-exl2
|
SinanAkkoyun
| 2023-10-04T00:04:06Z | 1 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-10-04T00:01:33Z |
---
license: apache-2.0
---
This is an EXL2 quantization of [PY007/TinyLlama-1.1B-Chat-v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3)
[3.0bpw](https://huggingface.co/SinanAkkoyun/TinyLlama-1.1B-Chat-v0.3-exl2/tree/3.0bpw)
Check out the awesome [Exllama V2 repo](https://github.com/turboderp/exllamav2)
|
duytintruong/dqn-SpaceInvadersNoFrameskip-v4
|
duytintruong
| 2023-10-03T23:49:21Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T23:48:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 692.50 +/- 156.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga duytintruong -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga duytintruong -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga duytintruong
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Rtrggf/Type-A
|
Rtrggf
| 2023-10-03T23:47:55Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"art",
"text-to-video",
"ae",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] |
text-to-video
| 2023-10-03T23:35:04Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- ae
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-to-video
tags:
- art
---
```csharp
using System;
using System.Drawing;
using AForge.Video.FFMPEG;

class Program
{
    static void Main()
    {
        // Write a sequence of JPEG frames into an MPEG-4 video file
        var videoWriter = new VideoFileWriter();
        videoWriter.Open("output.mp4", 640, 480, 25, VideoCodec.MPEG4);
        for (int i = 0; i < 10; i++)
        {
            Bitmap image = (Bitmap)Image.FromFile($"image{i}.jpg");
            videoWriter.WriteVideoFrame(image);
            image.Dispose();
        }
        videoWriter.Close();
    }
}
```
|
codyreading/dreambooth-bear-lawn2
|
codyreading
| 2023-10-03T23:42:12Z | 28 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-03T23:34:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: A photo of sks stuffed animal
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - codyreading/dreambooth-bear-lawn2
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on "A photo of sks stuffed animal" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
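A minimal inference sketch with 🤗 Diffusers, reusing the instance prompt from training (the scene description after the identifier is just an illustration):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned weights from this repo
pipe = StableDiffusionPipeline.from_pretrained(
    "codyreading/dreambooth-bear-lawn2", torch_dtype=torch.float16
).to("cuda")

# "sks" is the learned identifier from the instance prompt
image = pipe("A photo of sks stuffed animal on a lawn").images[0]
image.save("sks-stuffed-animal.png")
```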
|
Joshua-shadow/kjkj
|
Joshua-shadow
| 2023-10-03T23:40:04Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-10-03T23:36:52Z |
---
license: other
license_name: me
license_link: LICENSE
---
|
bdpc/resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd_CEKD_t1.0_a1.0
|
bdpc
| 2023-10-03T23:23:57Z | 199 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T19:12:41Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd_CEKD_t1.0_a1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd_CEKD_t1.0_a1.0
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7844
- Accuracy: 0.742
- Brier Loss: 0.4405
- Nll: 2.8680
- F1 Micro: 0.7420
- F1 Macro: 0.7411
- Ece: 0.1946
- Aurc: 0.1002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 250 | 2.7345 | 0.153 | 0.9327 | 8.3371 | 0.153 | 0.1246 | 0.0866 | 0.7933 |
| 2.6983 | 2.0 | 500 | 2.4500 | 0.4213 | 0.8816 | 4.7062 | 0.4213 | 0.3924 | 0.3073 | 0.4444 |
| 2.6983 | 3.0 | 750 | 1.7959 | 0.5012 | 0.7003 | 3.3576 | 0.5012 | 0.4758 | 0.1869 | 0.3051 |
| 1.7341 | 4.0 | 1000 | 1.3637 | 0.5985 | 0.5511 | 2.8818 | 0.5985 | 0.5868 | 0.1005 | 0.1935 |
| 1.7341 | 5.0 | 1250 | 1.1978 | 0.6498 | 0.4862 | 2.7546 | 0.6498 | 0.6471 | 0.0826 | 0.1500 |
| 1.0818 | 6.0 | 1500 | 1.0812 | 0.6853 | 0.4364 | 2.6325 | 0.6853 | 0.6845 | 0.0522 | 0.1217 |
| 1.0818 | 7.0 | 1750 | 1.0276 | 0.7013 | 0.4149 | 2.5542 | 0.7013 | 0.7003 | 0.0397 | 0.1108 |
| 0.7498 | 8.0 | 2000 | 0.9724 | 0.7133 | 0.3944 | 2.4773 | 0.7133 | 0.7129 | 0.0505 | 0.1040 |
| 0.7498 | 9.0 | 2250 | 0.9777 | 0.7248 | 0.3924 | 2.4916 | 0.7248 | 0.7242 | 0.0628 | 0.0992 |
| 0.5034 | 10.0 | 2500 | 1.0027 | 0.724 | 0.3976 | 2.4974 | 0.724 | 0.7250 | 0.0751 | 0.1032 |
| 0.5034 | 11.0 | 2750 | 0.9979 | 0.729 | 0.3913 | 2.5344 | 0.729 | 0.7295 | 0.0805 | 0.0988 |
| 0.3237 | 12.0 | 3000 | 1.0553 | 0.7192 | 0.4075 | 2.6242 | 0.7192 | 0.7193 | 0.0963 | 0.1072 |
| 0.3237 | 13.0 | 3250 | 1.1162 | 0.7175 | 0.4139 | 2.6543 | 0.7175 | 0.7185 | 0.1295 | 0.1093 |
| 0.2023 | 14.0 | 3500 | 1.1259 | 0.725 | 0.4140 | 2.6758 | 0.7250 | 0.7246 | 0.1237 | 0.1055 |
| 0.2023 | 15.0 | 3750 | 1.2728 | 0.7115 | 0.4381 | 2.8308 | 0.7115 | 0.7147 | 0.1464 | 0.1168 |
| 0.1264 | 16.0 | 4000 | 1.2664 | 0.7222 | 0.4296 | 2.8434 | 0.7223 | 0.7236 | 0.1523 | 0.1107 |
| 0.1264 | 17.0 | 4250 | 1.2620 | 0.724 | 0.4252 | 2.7990 | 0.724 | 0.7252 | 0.1563 | 0.1066 |
| 0.0802 | 18.0 | 4500 | 1.3362 | 0.727 | 0.4293 | 2.8642 | 0.7270 | 0.7267 | 0.1653 | 0.1090 |
| 0.0802 | 19.0 | 4750 | 1.3608 | 0.7302 | 0.4288 | 2.7893 | 0.7302 | 0.7318 | 0.1637 | 0.1059 |
| 0.0553 | 20.0 | 5000 | 1.3757 | 0.7308 | 0.4303 | 2.8861 | 0.7308 | 0.7300 | 0.1670 | 0.1073 |
| 0.0553 | 21.0 | 5250 | 1.4947 | 0.7295 | 0.4420 | 2.8306 | 0.7295 | 0.7300 | 0.1770 | 0.1128 |
| 0.0329 | 22.0 | 5500 | 1.5338 | 0.7265 | 0.4416 | 2.8729 | 0.7265 | 0.7273 | 0.1808 | 0.1097 |
| 0.0329 | 23.0 | 5750 | 1.5127 | 0.7355 | 0.4362 | 2.8574 | 0.7355 | 0.7366 | 0.1774 | 0.1045 |
| 0.0258 | 24.0 | 6000 | 1.5189 | 0.7352 | 0.4360 | 2.8435 | 0.7353 | 0.7344 | 0.1784 | 0.1030 |
| 0.0258 | 25.0 | 6250 | 1.5802 | 0.7362 | 0.4404 | 2.8399 | 0.7362 | 0.7362 | 0.1847 | 0.1013 |
| 0.0193 | 26.0 | 6500 | 1.5869 | 0.737 | 0.4378 | 2.8237 | 0.737 | 0.7362 | 0.1846 | 0.1022 |
| 0.0193 | 27.0 | 6750 | 1.6160 | 0.7365 | 0.4373 | 2.7928 | 0.7365 | 0.7360 | 0.1864 | 0.1049 |
| 0.014 | 28.0 | 7000 | 1.6775 | 0.7372 | 0.4426 | 2.9236 | 0.7372 | 0.7373 | 0.1909 | 0.1039 |
| 0.014 | 29.0 | 7250 | 1.6391 | 0.736 | 0.4370 | 2.8717 | 0.736 | 0.7358 | 0.1905 | 0.0999 |
| 0.0132 | 30.0 | 7500 | 1.6804 | 0.7355 | 0.4434 | 2.8397 | 0.7355 | 0.7360 | 0.1903 | 0.1067 |
| 0.0132 | 31.0 | 7750 | 1.6809 | 0.738 | 0.4386 | 2.8853 | 0.738 | 0.7387 | 0.1920 | 0.1015 |
| 0.0121 | 32.0 | 8000 | 1.6953 | 0.734 | 0.4443 | 2.8451 | 0.734 | 0.7342 | 0.1961 | 0.1013 |
| 0.0121 | 33.0 | 8250 | 1.7184 | 0.7425 | 0.4344 | 2.8180 | 0.7425 | 0.7423 | 0.1910 | 0.1014 |
| 0.0098 | 34.0 | 8500 | 1.7151 | 0.735 | 0.4445 | 2.8532 | 0.735 | 0.7337 | 0.1952 | 0.1000 |
| 0.0098 | 35.0 | 8750 | 1.7781 | 0.7338 | 0.4484 | 2.8133 | 0.7338 | 0.7351 | 0.1999 | 0.1052 |
| 0.0086 | 36.0 | 9000 | 1.7540 | 0.7372 | 0.4443 | 2.8388 | 0.7372 | 0.7388 | 0.1954 | 0.1039 |
| 0.0086 | 37.0 | 9250 | 1.7744 | 0.738 | 0.4474 | 2.8600 | 0.738 | 0.7390 | 0.1953 | 0.1057 |
| 0.0079 | 38.0 | 9500 | 1.7446 | 0.7368 | 0.4417 | 2.8485 | 0.7367 | 0.7374 | 0.1972 | 0.1016 |
| 0.0079 | 39.0 | 9750 | 1.7700 | 0.739 | 0.4398 | 2.8826 | 0.739 | 0.7395 | 0.1970 | 0.1023 |
| 0.0076 | 40.0 | 10000 | 1.7896 | 0.7368 | 0.4442 | 2.8449 | 0.7367 | 0.7376 | 0.1988 | 0.1033 |
| 0.0076 | 41.0 | 10250 | 1.7435 | 0.7402 | 0.4387 | 2.8390 | 0.7402 | 0.7405 | 0.1926 | 0.1031 |
| 0.0074 | 42.0 | 10500 | 1.7837 | 0.7338 | 0.4470 | 2.8191 | 0.7338 | 0.7339 | 0.2018 | 0.1035 |
| 0.0074 | 43.0 | 10750 | 1.8015 | 0.7392 | 0.4427 | 2.8093 | 0.7392 | 0.7401 | 0.1981 | 0.1017 |
| 0.0061 | 44.0 | 11000 | 1.8155 | 0.739 | 0.4449 | 2.8333 | 0.739 | 0.7406 | 0.1983 | 0.1022 |
| 0.0061 | 45.0 | 11250 | 1.7958 | 0.7392 | 0.4426 | 2.8161 | 0.7392 | 0.7385 | 0.1963 | 0.1039 |
| 0.0059 | 46.0 | 11500 | 1.8089 | 0.7422 | 0.4411 | 2.8174 | 0.7422 | 0.7422 | 0.1955 | 0.1011 |
| 0.0059 | 47.0 | 11750 | 1.8125 | 0.743 | 0.4386 | 2.8184 | 0.743 | 0.7435 | 0.1939 | 0.1012 |
| 0.0053 | 48.0 | 12000 | 1.8004 | 0.7372 | 0.4432 | 2.8413 | 0.7372 | 0.7371 | 0.1995 | 0.1023 |
| 0.0053 | 49.0 | 12250 | 1.8075 | 0.7405 | 0.4392 | 2.8569 | 0.7405 | 0.7397 | 0.1962 | 0.1015 |
| 0.0055 | 50.0 | 12500 | 1.7844 | 0.742 | 0.4405 | 2.8680 | 0.7420 | 0.7411 | 0.1946 | 0.1002 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
gyaan/ppo-Huggy
|
gyaan
| 2023-10-03T23:21:40Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-10-03T23:21:35Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: gyaan/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
zkdeng/swin-tiny-patch4-window7-224-finetuned-eurosat
|
zkdeng
| 2023-10-03T23:18:32Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T22:41:35Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2188
- Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6077 | 0.96 | 12 | 0.3408 | 0.895 |
| 0.3469 | 2.0 | 25 | 0.2188 | 0.92 |
| 0.2627 | 2.88 | 36 | 0.2183 | 0.915 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
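For a quick try-out, a minimal sketch with the 🤗 image-classification `pipeline` (the label names depend on the unknown fine-tuning dataset, so inspect the model config for the actual classes):
```python
from transformers import pipeline

# Load the fine-tuned Swin checkpoint from this repo
classifier = pipeline(
    "image-classification",
    model="zkdeng/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

# Accepts a local path or URL to an image
print(classifier("path/or/url/to/image.jpg"))
```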
|
jonale/q-FrozenLake-v1-4x4-noSlippery
|
jonale
| 2023-10-03T23:15:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T23:15:48Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="jonale/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
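`load_from_hub` here is the helper defined in the Deep RL Course notebook rather than a library import; a minimal sketch of an equivalent helper, assuming the model is stored as a pickled dict (as `q-learning.pkl` suggests):
```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-learning model dict from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="jonale/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# is_slippery=False matches the 4x4-noSlippery variant of the environment
env = gym.make(model["env_id"], is_slippery=False)
```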
|
sd-concepts-library/ahx-beta-51c9767
|
sd-concepts-library
| 2023-10-03T23:15:13Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-10-03T23:15:10Z |
---
license: mit
---
### ahx-beta-51c9767 on Stable Diffusion
This is the `<ahx-beta-51c9767>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
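Outside the notebooks, a minimal 🤗 Diffusers sketch using `load_textual_inversion` looks like the following; the base checkpoint is an assumption, since the card does not state which Stable Diffusion version the concept was trained against:
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model is assumed (SD 1.5); pick the checkpoint the concept was actually trained on
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned <ahx-beta-51c9767> embedding from this repo
pipe.load_textual_inversion("sd-concepts-library/ahx-beta-51c9767")

image = pipe("a landscape in the style of <ahx-beta-51c9767>").images[0]
image.save("ahx-beta-51c9767-sample.png")
```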
Here is the new concept you will be able to use as a `style`:








|
Adrian101-hnd/q-FrozenLake-v1-4x4-noSlippery
|
Adrian101-hnd
| 2023-10-03T22:51:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T22:51:48Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Adrian101-hnd/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
YuvanKumar/ppo-LunarLander-v2
|
YuvanKumar
| 2023-10-03T22:49:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T22:48:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.16 +/- 13.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
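A minimal sketch of what that code could look like; the checkpoint filename follows the usual `<model-name>.zip` convention and is an assumption, so check the files listed in this repo:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; verify it against the files in this repo
checkpoint = load_from_hub(repo_id="YuvanKumar/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```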
|
ipipan/nlpre_spacy_nkjp_pl-core-news-lg_nkjp-by-name
|
ipipan
| 2023-10-03T22:36:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T17:04:31Z |
# Model description
- Morphosyntactic analyzer: Spacy
- Tagset: NKJP
- Embedding vectors: pl-core-news-lg
- Dataset: NLPrePL-NKJP-fair-by-name (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_spacy_nkjp_pl-core-news-lg_nkjp-by-name
```
## Load model
```
import spacy
model_name = './nlpre_spacy_nkjp_pl-core-news-lg_nkjp-by-name'
model_upos = spacy.load(model_name + '/upos')
model_xpos = spacy.load(model_name + '/xpos+lemma')
```
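Once both pipelines are loaded, they can be applied directly to raw text; a minimal usage sketch (the Polish example sentence is illustrative, not from the card):
```python
text = "Ala ma kota."
# Universal POS tags from the UPOS pipeline
for token in model_upos(text):
    print(token.text, token.pos_)
# NKJP tags and lemmas from the XPOS+lemma pipeline
for token in model_xpos(text):
    print(token.text, token.tag_, token.lemma_)
```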
|
ipipan/nlpre_spacy_ud_pl-core-news-lg_nkjp-by-name
|
ipipan
| 2023-10-03T22:33:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T16:38:32Z |
# Model description
- Morphosyntactic analyzer: Spacy
- Tagset: UD
- Embedding vectors: pl-core-news-lg
- Dataset: NLPrePL-NKJP-fair-by-name (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_spacy_ud_pl-core-news-lg_nkjp-by-name
```
## Load model
```
import spacy
model_name = './nlpre_spacy_ud_pl-core-news-lg_nkjp-by-name/model-best'
model = spacy.load(model_name)
```
|
ipipan/nlpre_spacy_ud_transformer_nkjp-by-type
|
ipipan
| 2023-10-03T22:33:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T15:51:30Z |
# Model description
- Morphosyntactic analyzer: Spacy
- Tagset: UD
- Embedding vectors: Transformer
- Dataset: NLPrePL-NKJP-fair-by-type (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_spacy_ud_transformer_nkjp-by-type
```
## Load model
```
import spacy
model_name = './nlpre_spacy_ud_transformer_nkjp-by-type/model-best'
model = spacy.load(model_name)
```
|
LoneStriker/airoboros-mistral2.2-7b-8.0bpw-h6-exl2
|
LoneStriker
| 2023-10-03T22:20:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"llama-2",
"instruct",
"finetune",
"alpaca",
"gpt4",
"synthetic data",
"distillation",
"en",
"dataset:jondurbin/airoboros-2.2.1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T20:35:02Z |
---
base_model: mistralai/Mistral-7B-v0.1
tags:
- llama-2
- instruct
- finetune
- alpaca
- gpt4
- synthetic data
- distillation
datasets:
- jondurbin/airoboros-2.2.1
model-index:
- name: airoboros2.2-mistral-7b
results: []
license: mit
language:
- en
---
Mistral trained with the airoboros dataset!

The actual training dataset was airoboros 2.2, but it appears to have been replaced on the Hugging Face Hub with 2.2.1.
Prompt Format:
```
USER: <prompt>
ASSISTANT:
```
TruthfulQA:
```
hf-causal-experimental (pretrained=/home/teknium/dakota/lm-evaluation-harness/airoboros2.2-mistral/,dtype=float16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.3562|± |0.0168|
| | |mc2 |0.5217|± |0.0156|
```
Wandb training charts: https://wandb.ai/teknium1/airoboros-mistral-7b/runs/airoboros-mistral-1?workspace=user-teknium1
More info to come
|
ipipan/nlpre_combo_ud_herBERT_nkjp-by-name
|
ipipan
| 2023-10-03T22:16:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T15:35:38Z |
# Model description
- Morphosyntactic analyzer: COMBO
- Tagset: UD
- Embedding vectors: HerBERT (wiki)
- Dataset: NLPrePL-NKJP-fair-by-name (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_combo_ud_herBERT_nkjp-by-name
```
## Load model
```
from combo.predict import COMBO
model_name = './nlpre_combo_ud_herBERT_nkjp-by-name'
model = COMBO.from_pretrained(model_name)
```
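A short prediction sketch, continuing from the loaded `model`; the token attribute names follow COMBO's documentation and may differ between library versions:
```python
sentence = model("Ala ma kota.")
for token in sentence.tokens:
    # Field names are the ones used in COMBO's docs (an assumption here)
    print(token.token, token.upostag, token.lemma)
```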
|
ipipan/nlpre_combo_ud_herBERT_nkjp-by-type
|
ipipan
| 2023-10-03T22:15:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T15:35:27Z |
# Model description
- Morphosyntactic analyzer: COMBO
- Tagset: UD
- Embedding vectors: HerBERT (wiki)
- Dataset: NLPrePL-NKJP-fair-by-type (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_combo_ud_herBERT_nkjp-by-type
```
## Load model
```
from combo.predict import COMBO
model_name = './nlpre_combo_ud_herBERT_nkjp-by-type'
model = COMBO.from_pretrained(model_name)
```
|
ipipan/nlpre_combo_nkjp_herBERT_nkjp-by-type
|
ipipan
| 2023-10-03T22:15:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T15:34:36Z |
# Model description
- Morphosyntactic analyzer: COMBO
- Tagset: NKJP
- Embedding vectors: HerBERT (wiki)
- Dataset: NLPrePL-NKJP-fair-by-type (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_combo_nkjp_herBERT_nkjp-by-type
```
## Load model
```
from combo.predict import COMBO
model_name = './nlpre_combo_nkjp_herBERT_nkjp-by-type'
model = COMBO.from_pretrained(model_name)
```
|
ipipan/nlpre_combo_nkjp_herBERT_nkjp-by-name
|
ipipan
| 2023-10-03T22:14:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T15:34:23Z |
# Model description
- Morphosyntactic analyzer: COMBO
- Tagset: NKJP
- Embedding vectors: HerBERT (wiki)
- Dataset: NLPrePL-NKJP-fair-by-name (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_combo_nkjp_herBERT_nkjp-by-name
```
## Load model
```
from combo.predict import COMBO
model_name = './nlpre_combo_nkjp_herBERT_nkjp-by-name'
model = COMBO.from_pretrained(model_name)
```
|
ipipan/nlpre_combo_nkjp_fasttext_nkjp-by-type
|
ipipan
| 2023-10-03T22:13:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T14:40:21Z |
# Model description
- Morphosyntactic analyzer: COMBO
- Tagset: NKJP
- Embedding vectors: Fasttext (wiki)
- Dataset: NLPrePL-NKJP-fair-by-type (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_combo_nkjp_fasttext_nkjp-by-type
```
## Load model
```
from combo.predict import COMBO
model_name = './nlpre_combo_nkjp_fasttext_nkjp-by-type'
model = COMBO.from_pretrained(model_name)
```
|
ipipan/nlpre_combo_ud_fasttext_nkjp-by-name
|
ipipan
| 2023-10-03T22:12:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T14:39:43Z |
# Model description
- Morphosyntactic analyzer: COMBO
- Tagset: UD
- Embedding vectors: Fasttext (wiki)
- Dataset: NLPrePL-NKJP-fair-by-name (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_combo_ud_fasttext_nkjp-by-name
```
## Load model
```
from combo.predict import COMBO
model_name = './nlpre_combo_ud_fasttext_nkjp-by-name'
model = COMBO.from_pretrained(model_name)
```
|
ipipan/nlpre_combo_ud_fasttext_nkjp-by-type
|
ipipan
| 2023-10-03T22:12:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T14:39:23Z |
# Model description
- Morphosyntactic analyzer: COMBO
- Tagset: UD
- Embedding vectors: Fasttext (wiki)
- Dataset: NLPrePL-NKJP-fair-by-type (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_combo_ud_fasttext_nkjp-by-type
```
## Load model
```
from combo.predict import COMBO
model_name = './nlpre_combo_ud_fasttext_nkjp-by-type'
model = COMBO.from_pretrained(model_name)
```
|
ipipan/nlpre_trankit_nkjp_xlm-roberta-base_nkjp-by-type
|
ipipan
| 2023-10-03T22:05:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T09:08:44Z |
# Model description
- Morphosyntactic analyzer: Trankit
- Tagset: NKJP
- Embedding vectors: XLM-RoBERTa-Base
- Dataset: NLPrePL-NKJP-fair-by-type (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_trankit_nkjp_xlm-roberta-base_nkjp-by-type
```
## Load model
```
import trankit
model_path = './nlpre_trankit_nkjp_xlm-roberta-base_nkjp-by-type'
trankit.verify_customized_pipeline(
category='customized-mwt', # pipeline category
save_dir=model_path, # directory used for saving models in previous steps
embedding_name='xlm-roberta-base' # embedding version that we use for training our customized pipeline, by default, it is `xlm-roberta-base`
)
model = trankit.Pipeline(lang='customized-mwt', cache_dir=model_path, embedding='xlm-roberta-base')
```
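Prediction then works by calling the pipeline on raw text; a minimal sketch, assuming Trankit's usual dictionary output format:
```python
doc = model("Ala ma kota.")
for sentence in doc["sentences"]:
    for token in sentence["tokens"]:
        # Key names follow the Trankit docs; multi-word tokens expose sub-words under "expanded"
        print(token.get("text"), token.get("upos"), token.get("lemma"))
```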
|
ipipan/nlpre_trankit_nkjp_xlm-roberta-base_nkjp-by-name
|
ipipan
| 2023-10-03T22:05:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T09:08:28Z |
# Model description
- Morphosyntactic analyzer: Trankit
- Tagset: NKJP
- Embedding vectors: XLM-RoBERTa-Base
- Dataset: NLPrePL-NKJP-fair-by-name (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_trankit_nkjp_xlm-roberta-base_nkjp-by-name
```
## Load model
```
import trankit
model_path = './nlpre_trankit_nkjp_xlm-roberta-base_nkjp-by-name'
trankit.verify_customized_pipeline(
category='customized-mwt', # pipeline category
save_dir=model_path, # directory used for saving models in previous steps
embedding_name='xlm-roberta-base' # embedding version that we use for training our customized pipeline, by default, it is `xlm-roberta-base`
)
model = trankit.Pipeline(lang='customized-mwt', cache_dir=model_path, embedding='xlm-roberta-base')
```
|
SkunkworksAI/Mistralic-7B-1
|
SkunkworksAI
| 2023-10-03T22:04:49Z | 52 | 26 |
transformers
|
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2023-09-29T23:45:09Z |
# 🦾 Mistralic-7B-1 🦾
Special thanks to Together Compute for sponsoring Skunkworks with compute!
**INFERENCE**
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device('cuda')
system_prompt = "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n"
system_no_input_prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
def generate_prompt(instruction, input=None):
if input:
prompt = f"### System:\n{system_prompt}\n\n"
else:
prompt = f"### System:\n{system_no_input_prompt}\n\n"
prompt += f"### Instruction:\n{instruction}\n\n"
if input:
prompt += f"### Input:\n{input}\n\n"
return prompt + """### Response:\n"""
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("SkunkworksAI/Mistralic-7B-1")
tokenizer = AutoTokenizer.from_pretrained("SkunkworksAI/Mistralic-7B-1")
while True:
instruction = input("Enter Instruction: ")
instruction = generate_prompt(instruction)
inputs = tokenizer(instruction, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=1000, do_sample=True, temperature=0.01, use_cache=True, eos_token_id=tokenizer.eos_token_id)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
**EVALUATION**

Average: 0.72157
For comparison:
- mistralai/Mistral-7B-v0.1 scores 0.7116
- mistralai/Mistral-7B-Instruct-v0.1 scores 0.6794
|
ipipan/nlpre_trankit_ud_xlm-roberta-base_nkjp-by-name
|
ipipan
| 2023-10-03T22:04:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T09:00:18Z |
# Model description
- Morphosyntactic analyzer: Trankit
- Tagset: UD
- Embedding vectors: XLM-RoBERTa-Base
- Dataset: NLPrePL-NKJP-fair-by-name (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_trankit_ud_xlm-roberta-base_nkjp-by-name
```
## Load model
```
import trankit
model_path = './nlpre_trankit_ud_xlm-roberta-base_nkjp-by-name'
trankit.verify_customized_pipeline(
category='customized-mwt', # pipeline category
save_dir=model_path, # directory used for saving models in previous steps
embedding_name='xlm-roberta-base' # embedding version that we use for training our customized pipeline, by default, it is `xlm-roberta-base`
)
model = trankit.Pipeline(lang='customized-mwt', cache_dir=model_path, embedding='xlm-roberta-base')
```
|
LoneStriker/airoboros-mistral2.2-7b-6.0bpw-h6-exl2
|
LoneStriker
| 2023-10-03T22:02:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"llama-2",
"instruct",
"finetune",
"alpaca",
"gpt4",
"synthetic data",
"distillation",
"en",
"dataset:jondurbin/airoboros-2.2.1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T20:34:58Z |
---
base_model: mistralai/Mistral-7B-v0.1
tags:
- llama-2
- instruct
- finetune
- alpaca
- gpt4
- synthetic data
- distillation
datasets:
- jondurbin/airoboros-2.2.1
model-index:
- name: airoboros2.2-mistral-7b
results: []
license: mit
language:
- en
---
Mistral trained with the airoboros dataset!

The actual training dataset was airoboros 2.2, but it appears to have been replaced on the Hugging Face Hub with 2.2.1.
Prompt Format:
```
USER: <prompt>
ASSISTANT:
```
TruthfulQA:
```
hf-causal-experimental (pretrained=/home/teknium/dakota/lm-evaluation-harness/airoboros2.2-mistral/,dtype=float16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.3562|± |0.0168|
| | |mc2 |0.5217|± |0.0156|
```
Wandb training charts: https://wandb.ai/teknium1/airoboros-mistral-7b/runs/airoboros-mistral-1?workspace=user-teknium1
More info to come
|
NAB1108/News_Recommendation
|
NAB1108
| 2023-10-03T22:00:57Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-10-03T14:22:20Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# NAB1108/News_Recommendation
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("NAB1108/News_Recommendation")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
fernandohf/model_en_pt
|
fernandohf
| 2023-10-03T21:52:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-03T20:15:33Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: model_en_pt
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-pt
split: train
args: en-pt
metrics:
- name: Bleu
type: bleu
value: 1.0547
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_en_pt
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1561
- Bleu: 1.0547
- Gen Len: 17.8369
## Model description
More information needed
## Intended uses & limitations
More information needed
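A minimal inference sketch with the 🤗 Transformers pipeline; the task prefix is the standard T5 convention and is an assumption, since the card does not document the exact prompt used during fine-tuning:
```python
from transformers import pipeline
translator = pipeline("text2text-generation", model="fernandohf/model_en_pt")
# The "translate English to Portuguese:" prefix is an assumption, not documented in this card
print(translator("translate English to Portuguese: The book is on the table.")[0]["generated_text"])
```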
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 79 | 3.4923 | 1.2907 | 17.2624 |
| No log | 2.0 | 158 | 3.3099 | 0.8388 | 17.2482 |
| No log | 3.0 | 237 | 3.2192 | 0.8526 | 17.7021 |
| No log | 4.0 | 316 | 3.1709 | 0.9618 | 17.8298 |
| No log | 5.0 | 395 | 3.1561 | 1.0547 | 17.8369 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
ipipan/nlpre_trankit_ud_xlm-roberta-large_pdb
|
ipipan
| 2023-10-03T21:49:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T08:54:04Z |
# Model description
- Morphosyntactic analyzer: Trankit
- Tagset: UD
- Embedding vectors: XLM-RoBERTa-Large
- Dataset: PDB (http://git.nlp.ipipan.waw.pl/alina/PDBUD/tree/master/PDB-UD/PDB-UD)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_trankit_ud_xlm-roberta-large_pdb
```
## Load model
```
import trankit
model_path = './nlpre_trankit_ud_xlm-roberta-large_pdb'
trankit.verify_customized_pipeline(
category='customized-mwt', # pipeline category
save_dir=model_path, # directory used for saving models in previous steps
embedding_name='xlm-roberta-large' # embedding version that we use for training our customized pipeline, by default, it is `xlm-roberta-base`
)
model = trankit.Pipeline(lang='customized-mwt', cache_dir=model_path, embedding='xlm-roberta-large')
```
|
ipipan/nlpre_trankit_ud_xlm-roberta-large_nkjp-by-type
|
ipipan
| 2023-10-03T21:48:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-28T15:24:37Z |
# Model description
- Morphosyntactic analyzer: Trankit
- Tagset: UD
- Embedding vectors: XLM-RoBERTa-Large
- Dataset: NLPrePL-NKJP-fair-by-type (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_trankit_ud_xlm-roberta-large_nkjp-by-type
```
## Load model
```
import trankit
model_path = './nlpre_trankit_ud_xlm-roberta-large_nkjp-by-type'
trankit.verify_customized_pipeline(
category='customized-mwt', # pipeline category
save_dir=model_path, # directory used for saving models in previous steps
embedding_name='xlm-roberta-large' # embedding version that we use for training our customized pipeline, by default, it is `xlm-roberta-base`
)
model = trankit.Pipeline(lang='customized-mwt', cache_dir=model_path, embedding='xlm-roberta-large')
```
|
Safeer143/bert-finetuned-squad
|
Safeer143
| 2023-10-03T21:36:31Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-01T23:27:03Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
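A minimal extractive question-answering sketch with the 🤗 Transformers pipeline (the question/context pair is purely illustrative):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="Safeer143/bert-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```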
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
raulkw/kw-prop-classifier
|
raulkw
| 2023-10-03T21:35:50Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T21:35:42Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: kw-prop-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6266666650772095
---
# kw-prop-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
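A minimal inference sketch with the 🤗 Transformers image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="raulkw/kw-prop-classifier")
# "room.jpg" is a placeholder path to any property photo
for pred in classifier("room.jpg"):
    print(pred["label"], round(pred["score"], 3))
```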
## Example Images
#### Bathroom

#### Bedroom

#### Dining Room

#### House Amenities

#### House Facade

#### House Hallway

#### House Interior

#### House Lobby

#### Kitchen

#### Living Room

|
Chunwei94/souffle-ae
|
Chunwei94
| 2023-10-03T20:57:19Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2023-10-02T17:37:00Z |
# For models with python bindings, please first run `pip install .` in the corresponding directory.
docker run -it --name=sirius_test --gpus all --privileged sunqianqi/sirius:mlsys_ae /bin/bash
|
EladAssia/poca-SoccerTwos
|
EladAssia
| 2023-10-03T20:32:53Z | 37 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-10-03T20:32:17Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: EladAssia/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ArtCad98/nlp_covid_tweets
|
ArtCad98
| 2023-10-03T20:32:20Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:finiteautomata/beto-emotion-analysis",
"base_model:finetune:finiteautomata/beto-emotion-analysis",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-12T14:42:25Z |
---
base_model: finiteautomata/beto-emotion-analysis
tags:
- generated_from_trainer
model-index:
- name: nlp_covid_tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nlp_covid_tweets
This model is a fine-tuned version of [finiteautomata/beto-emotion-analysis](https://huggingface.co/finiteautomata/beto-emotion-analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1415
## Model description
More information needed
## Intended uses & limitations
More information needed
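A minimal fill-mask sketch with the 🤗 Transformers pipeline; the Spanish example sentence is illustrative, and `[MASK]` is assumed to be the mask token of the BETO base model:
```python
from transformers import pipeline
fill = pipeline("fill-mask", model="ArtCad98/nlp_covid_tweets")
for pred in fill("La vacuna contra el covid es [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```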
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.406 | 1.0 | 6705 | 1.3510 |
| 1.2393 | 2.0 | 13410 | 1.2051 |
| 1.1622 | 3.0 | 20115 | 1.1410 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
Terps/whisper-small-minds14-enUS
|
Terps
| 2023-10-03T20:05:58Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-03T19:07:18Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-small-minds14-enUS
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 28.158205430932703
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-minds14-enUS
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5829
- Wer Ortho: 27.8223
- Wer: 28.1582
## Model description
More information needed
## Intended uses & limitations
More information needed
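A minimal transcription sketch with the 🤗 Transformers pipeline (the audio path is a placeholder for any English recording):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="Terps/whisper-small-minds14-enUS")
# "sample.wav" is a placeholder path to any English recording
print(asr("sample.wav")["text"])
```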
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0003 | 17.86 | 500 | 0.5829 | 27.8223 | 28.1582 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
bdpc/resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t2.5_a0.9
|
bdpc
| 2023-10-03T19:57:33Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T19:42:19Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t2.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t2.5_a0.9
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8672
- Accuracy: 0.71
- Brier Loss: 0.4047
- Nll: 2.1924
- F1 Micro: 0.7100
- F1 Macro: 0.6463
- Ece: 0.2420
- Aurc: 0.1050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 2.1239 | 0.16 | 0.8967 | 8.4233 | 0.16 | 0.1062 | 0.2101 | 0.8304 |
| No log | 2.0 | 26 | 2.1201 | 0.14 | 0.8961 | 8.2220 | 0.14 | 0.0876 | 0.1970 | 0.8491 |
| No log | 3.0 | 39 | 2.0724 | 0.215 | 0.8865 | 6.2039 | 0.2150 | 0.1169 | 0.2432 | 0.7837 |
| No log | 4.0 | 52 | 2.0291 | 0.185 | 0.8773 | 5.6169 | 0.185 | 0.0792 | 0.2329 | 0.7651 |
| No log | 5.0 | 65 | 1.9592 | 0.215 | 0.8614 | 6.0237 | 0.2150 | 0.0835 | 0.2493 | 0.7373 |
| No log | 6.0 | 78 | 1.9039 | 0.205 | 0.8483 | 5.9575 | 0.205 | 0.0619 | 0.2493 | 0.7526 |
| No log | 7.0 | 91 | 1.8651 | 0.26 | 0.8381 | 5.6215 | 0.26 | 0.1490 | 0.2663 | 0.6747 |
| No log | 8.0 | 104 | 1.8342 | 0.225 | 0.8311 | 5.7631 | 0.225 | 0.1071 | 0.2425 | 0.6919 |
| No log | 9.0 | 117 | 1.8057 | 0.31 | 0.8218 | 5.2969 | 0.31 | 0.2118 | 0.2795 | 0.6489 |
| No log | 10.0 | 130 | 1.5737 | 0.46 | 0.7277 | 5.1748 | 0.46 | 0.2853 | 0.3279 | 0.2977 |
| No log | 11.0 | 143 | 1.5629 | 0.415 | 0.7331 | 4.8259 | 0.415 | 0.2846 | 0.2924 | 0.3880 |
| No log | 12.0 | 156 | 1.5283 | 0.45 | 0.7135 | 4.0012 | 0.45 | 0.3122 | 0.3298 | 0.3197 |
| No log | 13.0 | 169 | 1.4200 | 0.51 | 0.6674 | 3.9849 | 0.51 | 0.3400 | 0.3259 | 0.2549 |
| No log | 14.0 | 182 | 1.4334 | 0.535 | 0.6710 | 3.7006 | 0.535 | 0.3840 | 0.3291 | 0.2584 |
| No log | 15.0 | 195 | 1.4306 | 0.45 | 0.6854 | 3.8260 | 0.45 | 0.3120 | 0.3055 | 0.4297 |
| No log | 16.0 | 208 | 1.3175 | 0.585 | 0.6174 | 3.3484 | 0.585 | 0.4401 | 0.3406 | 0.1916 |
| No log | 17.0 | 221 | 1.2680 | 0.57 | 0.5998 | 3.1408 | 0.57 | 0.4356 | 0.2903 | 0.2136 |
| No log | 18.0 | 234 | 1.2605 | 0.58 | 0.6020 | 3.2085 | 0.58 | 0.4711 | 0.2915 | 0.2355 |
| No log | 19.0 | 247 | 1.2292 | 0.61 | 0.5791 | 3.0633 | 0.61 | 0.5021 | 0.2929 | 0.2082 |
| No log | 20.0 | 260 | 1.3872 | 0.54 | 0.6604 | 3.2778 | 0.54 | 0.4604 | 0.3284 | 0.3506 |
| No log | 21.0 | 273 | 1.1646 | 0.625 | 0.5520 | 2.8539 | 0.625 | 0.5193 | 0.2828 | 0.1885 |
| No log | 22.0 | 286 | 1.1565 | 0.655 | 0.5438 | 2.6915 | 0.655 | 0.5437 | 0.3430 | 0.1549 |
| No log | 23.0 | 299 | 1.1041 | 0.625 | 0.5298 | 2.9930 | 0.625 | 0.5241 | 0.2423 | 0.1906 |
| No log | 24.0 | 312 | 1.0448 | 0.685 | 0.4895 | 2.8196 | 0.685 | 0.5846 | 0.2701 | 0.1411 |
| No log | 25.0 | 325 | 1.0623 | 0.695 | 0.4904 | 2.6903 | 0.695 | 0.6086 | 0.2762 | 0.1435 |
| No log | 26.0 | 338 | 0.9872 | 0.695 | 0.4607 | 2.6336 | 0.695 | 0.5953 | 0.2728 | 0.1180 |
| No log | 27.0 | 351 | 0.9789 | 0.705 | 0.4580 | 2.6326 | 0.705 | 0.6127 | 0.2579 | 0.1171 |
| No log | 28.0 | 364 | 1.0033 | 0.685 | 0.4707 | 2.5747 | 0.685 | 0.5906 | 0.2747 | 0.1291 |
| No log | 29.0 | 377 | 1.0152 | 0.7 | 0.4789 | 2.4333 | 0.7 | 0.6260 | 0.2951 | 0.1739 |
| No log | 30.0 | 390 | 1.0107 | 0.715 | 0.4684 | 2.5194 | 0.715 | 0.6401 | 0.3197 | 0.1389 |
| No log | 31.0 | 403 | 0.9511 | 0.69 | 0.4445 | 2.5648 | 0.69 | 0.6131 | 0.2648 | 0.1298 |
| No log | 32.0 | 416 | 0.9586 | 0.735 | 0.4448 | 2.3342 | 0.735 | 0.6578 | 0.2941 | 0.1275 |
| No log | 33.0 | 429 | 1.0010 | 0.73 | 0.4625 | 2.4748 | 0.7300 | 0.6613 | 0.3307 | 0.1202 |
| No log | 34.0 | 442 | 0.9481 | 0.71 | 0.4361 | 2.4986 | 0.7100 | 0.6456 | 0.2856 | 0.1228 |
| No log | 35.0 | 455 | 0.9190 | 0.69 | 0.4323 | 2.6586 | 0.69 | 0.6265 | 0.2538 | 0.1250 |
| No log | 36.0 | 468 | 0.9226 | 0.715 | 0.4350 | 2.2652 | 0.715 | 0.6507 | 0.2868 | 0.1328 |
| No log | 37.0 | 481 | 0.9017 | 0.725 | 0.4182 | 2.5141 | 0.7250 | 0.6590 | 0.2547 | 0.1013 |
| No log | 38.0 | 494 | 0.9092 | 0.72 | 0.4218 | 2.5171 | 0.72 | 0.6495 | 0.2677 | 0.1055 |
| 1.0958 | 39.0 | 507 | 0.9093 | 0.71 | 0.4221 | 2.6479 | 0.7100 | 0.6456 | 0.2567 | 0.1185 |
| 1.0958 | 40.0 | 520 | 0.8926 | 0.71 | 0.4204 | 2.3785 | 0.7100 | 0.6522 | 0.2396 | 0.1153 |
| 1.0958 | 41.0 | 533 | 0.8928 | 0.715 | 0.4157 | 2.5719 | 0.715 | 0.6487 | 0.2708 | 0.1067 |
| 1.0958 | 42.0 | 546 | 0.8967 | 0.715 | 0.4247 | 2.6422 | 0.715 | 0.6495 | 0.2525 | 0.1174 |
| 1.0958 | 43.0 | 559 | 0.8773 | 0.695 | 0.4116 | 2.5548 | 0.695 | 0.6400 | 0.2491 | 0.1142 |
| 1.0958 | 44.0 | 572 | 0.8660 | 0.71 | 0.4036 | 2.2950 | 0.7100 | 0.6535 | 0.2401 | 0.1009 |
| 1.0958 | 45.0 | 585 | 0.8718 | 0.72 | 0.4057 | 2.4922 | 0.72 | 0.6551 | 0.2624 | 0.0998 |
| 1.0958 | 46.0 | 598 | 0.8737 | 0.7 | 0.4070 | 2.4455 | 0.7 | 0.6416 | 0.2360 | 0.1052 |
| 1.0958 | 47.0 | 611 | 0.8707 | 0.715 | 0.4094 | 2.3519 | 0.715 | 0.6494 | 0.2514 | 0.1086 |
| 1.0958 | 48.0 | 624 | 0.8640 | 0.705 | 0.4039 | 2.3765 | 0.705 | 0.6430 | 0.2538 | 0.1041 |
| 1.0958 | 49.0 | 637 | 0.8702 | 0.7 | 0.4066 | 2.5524 | 0.7 | 0.6423 | 0.2160 | 0.1080 |
| 1.0958 | 50.0 | 650 | 0.8672 | 0.71 | 0.4047 | 2.1924 | 0.7100 | 0.6463 | 0.2420 | 0.1050 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
AIHUBFRANCE/personnalites-mediatiques
|
AIHUBFRANCE
| 2023-10-03T19:51:11Z | 0 | 1 | null |
[
"fr",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2023-08-19T11:51:55Z |
---
license: cc-by-sa-4.0
language:
- fr
---
# Personnalités Médiatiques .zip files
## Available on weights.gg
## Or in the "Files and Versions" section
|
Quacktab/Reinforce-Pixelcopter
|
Quacktab
| 2023-10-03T19:47:35Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T09:13:46Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.60 +/- 10.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bdpc/resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t2.5_a0.7
|
bdpc
| 2023-10-03T19:41:47Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T19:23:34Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t2.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t2.5_a0.7
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8012
- Accuracy: 0.7
- Brier Loss: 0.4467
- Nll: 2.5682
- F1 Micro: 0.7
- F1 Macro: 0.6313
- Ece: 0.2684
- Aurc: 0.1170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 1.8024 | 0.16 | 0.8966 | 8.5001 | 0.16 | 0.1073 | 0.2079 | 0.8334 |
| No log | 2.0 | 26 | 1.7941 | 0.145 | 0.8957 | 8.3207 | 0.145 | 0.0843 | 0.2022 | 0.8435 |
| No log | 3.0 | 39 | 1.7486 | 0.2 | 0.8868 | 6.2015 | 0.2000 | 0.1007 | 0.2209 | 0.7900 |
| No log | 4.0 | 52 | 1.6854 | 0.205 | 0.8738 | 6.0142 | 0.205 | 0.0707 | 0.2453 | 0.7584 |
| No log | 5.0 | 65 | 1.6162 | 0.2 | 0.8594 | 6.2364 | 0.2000 | 0.0552 | 0.2466 | 0.7717 |
| No log | 6.0 | 78 | 1.5412 | 0.235 | 0.8416 | 6.0423 | 0.235 | 0.0902 | 0.2589 | 0.7006 |
| No log | 7.0 | 91 | 1.5011 | 0.295 | 0.8304 | 6.1420 | 0.295 | 0.1272 | 0.2803 | 0.6124 |
| No log | 8.0 | 104 | 1.4415 | 0.3 | 0.8114 | 6.0440 | 0.3 | 0.1296 | 0.2870 | 0.5641 |
| No log | 9.0 | 117 | 1.3257 | 0.38 | 0.7625 | 5.6923 | 0.38 | 0.2198 | 0.3136 | 0.3675 |
| No log | 10.0 | 130 | 1.3748 | 0.33 | 0.7905 | 5.5276 | 0.33 | 0.1870 | 0.2947 | 0.5985 |
| No log | 11.0 | 143 | 1.3294 | 0.39 | 0.7683 | 4.9632 | 0.39 | 0.2573 | 0.2940 | 0.4639 |
| No log | 12.0 | 156 | 1.2444 | 0.385 | 0.7297 | 4.8431 | 0.3850 | 0.2330 | 0.2849 | 0.4173 |
| No log | 13.0 | 169 | 1.2212 | 0.45 | 0.7153 | 4.5819 | 0.45 | 0.3051 | 0.3143 | 0.3379 |
| No log | 14.0 | 182 | 1.1835 | 0.495 | 0.6888 | 3.6108 | 0.495 | 0.3412 | 0.3316 | 0.2873 |
| No log | 15.0 | 195 | 1.1203 | 0.47 | 0.6559 | 3.6500 | 0.47 | 0.3348 | 0.2935 | 0.3061 |
| No log | 16.0 | 208 | 1.1520 | 0.495 | 0.6707 | 3.8106 | 0.495 | 0.3632 | 0.2938 | 0.3604 |
| No log | 17.0 | 221 | 1.0261 | 0.565 | 0.6021 | 3.3382 | 0.565 | 0.4214 | 0.2840 | 0.2047 |
| No log | 18.0 | 234 | 1.0080 | 0.61 | 0.5914 | 3.2936 | 0.61 | 0.4748 | 0.3240 | 0.1806 |
| No log | 19.0 | 247 | 1.0696 | 0.58 | 0.6253 | 3.2354 | 0.58 | 0.4686 | 0.3152 | 0.2626 |
| No log | 20.0 | 260 | 0.9733 | 0.615 | 0.5722 | 3.1019 | 0.615 | 0.4968 | 0.3259 | 0.2066 |
| No log | 21.0 | 273 | 0.9266 | 0.625 | 0.5423 | 3.0239 | 0.625 | 0.5202 | 0.2834 | 0.1782 |
| No log | 22.0 | 286 | 0.9364 | 0.66 | 0.5461 | 2.9031 | 0.66 | 0.5461 | 0.3128 | 0.1601 |
| No log | 23.0 | 299 | 0.9181 | 0.675 | 0.5307 | 2.8416 | 0.675 | 0.5584 | 0.3106 | 0.1462 |
| No log | 24.0 | 312 | 0.9739 | 0.665 | 0.5539 | 2.8798 | 0.665 | 0.5634 | 0.3325 | 0.1610 |
| No log | 25.0 | 325 | 0.8851 | 0.69 | 0.5099 | 2.7336 | 0.69 | 0.6013 | 0.3064 | 0.1437 |
| No log | 26.0 | 338 | 0.8755 | 0.71 | 0.4979 | 2.7400 | 0.7100 | 0.6032 | 0.3162 | 0.1211 |
| No log | 27.0 | 351 | 0.8653 | 0.675 | 0.4964 | 2.8339 | 0.675 | 0.5705 | 0.2977 | 0.1386 |
| No log | 28.0 | 364 | 0.8838 | 0.675 | 0.5055 | 2.7456 | 0.675 | 0.5816 | 0.2969 | 0.1524 |
| No log | 29.0 | 377 | 0.8805 | 0.68 | 0.5025 | 2.6942 | 0.68 | 0.5855 | 0.3099 | 0.1380 |
| No log | 30.0 | 390 | 0.8585 | 0.665 | 0.4891 | 2.7511 | 0.665 | 0.5737 | 0.2627 | 0.1370 |
| No log | 31.0 | 403 | 0.8410 | 0.675 | 0.4736 | 2.6431 | 0.675 | 0.5985 | 0.2670 | 0.1335 |
| No log | 32.0 | 416 | 0.8378 | 0.71 | 0.4724 | 2.7320 | 0.7100 | 0.6236 | 0.2885 | 0.1153 |
| No log | 33.0 | 429 | 0.8421 | 0.705 | 0.4718 | 2.6331 | 0.705 | 0.6326 | 0.2644 | 0.1147 |
| No log | 34.0 | 442 | 0.8350 | 0.685 | 0.4697 | 2.8035 | 0.685 | 0.6062 | 0.2831 | 0.1291 |
| No log | 35.0 | 455 | 0.8377 | 0.7 | 0.4708 | 2.4611 | 0.7 | 0.6376 | 0.3173 | 0.1195 |
| No log | 36.0 | 468 | 0.8126 | 0.69 | 0.4562 | 2.3909 | 0.69 | 0.6154 | 0.2433 | 0.1177 |
| No log | 37.0 | 481 | 0.8299 | 0.685 | 0.4673 | 2.5695 | 0.685 | 0.6080 | 0.2802 | 0.1261 |
| No log | 38.0 | 494 | 0.8197 | 0.685 | 0.4597 | 2.6388 | 0.685 | 0.6187 | 0.2690 | 0.1229 |
| 0.9314 | 39.0 | 507 | 0.8137 | 0.695 | 0.4547 | 2.7263 | 0.695 | 0.6332 | 0.2581 | 0.1207 |
| 0.9314 | 40.0 | 520 | 0.8168 | 0.69 | 0.4583 | 2.6230 | 0.69 | 0.6267 | 0.2696 | 0.1161 |
| 0.9314 | 41.0 | 533 | 0.8090 | 0.7 | 0.4529 | 2.6449 | 0.7 | 0.6236 | 0.2445 | 0.1187 |
| 0.9314 | 42.0 | 546 | 0.8168 | 0.68 | 0.4586 | 2.5516 | 0.68 | 0.6162 | 0.2722 | 0.1275 |
| 0.9314 | 43.0 | 559 | 0.8100 | 0.7 | 0.4523 | 2.5565 | 0.7 | 0.6347 | 0.2869 | 0.1192 |
| 0.9314 | 44.0 | 572 | 0.8078 | 0.7 | 0.4514 | 2.5734 | 0.7 | 0.6344 | 0.2583 | 0.1172 |
| 0.9314 | 45.0 | 585 | 0.8022 | 0.715 | 0.4472 | 2.4971 | 0.715 | 0.6534 | 0.2890 | 0.1165 |
| 0.9314 | 46.0 | 598 | 0.8049 | 0.695 | 0.4484 | 2.4891 | 0.695 | 0.6423 | 0.2722 | 0.1189 |
| 0.9314 | 47.0 | 611 | 0.8025 | 0.705 | 0.4481 | 2.4929 | 0.705 | 0.6393 | 0.2650 | 0.1124 |
| 0.9314 | 48.0 | 624 | 0.7973 | 0.7 | 0.4439 | 2.5000 | 0.7 | 0.6292 | 0.2718 | 0.1142 |
| 0.9314 | 49.0 | 637 | 0.8011 | 0.7 | 0.4464 | 2.5713 | 0.7 | 0.6303 | 0.2400 | 0.1183 |
| 0.9314 | 50.0 | 650 | 0.8012 | 0.7 | 0.4467 | 2.5682 | 0.7 | 0.6313 | 0.2684 | 0.1170 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
ipipan/nlpre_stanza_nkjp_fasttext_nkjp-by-type
|
ipipan
| 2023-10-03T19:35:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-29T08:04:40Z |
# Model description
- Morphosyntactic analyzer: Stanza
- Tagset: NKJP
- Embedding vectors: Fasttext (wiki)
- Dataset: NLPrePL-NKJP-fair-by-type (https://huggingface.co/datasets/ipipan/nlprepl)
# How to use
## Clone
```
git clone [email protected]:ipipan/nlpre_stanza_nkjp_fasttext_nkjp-by-type
```
## Load model
```
import os
import stanza
lang = 'pl'
model_name = 'nlpre_stanza_nkjp_fasttext_nkjp-by-type'
prefix = 'nkjpbytype_nkjp'
config = \
{
# Comma-separated list of processors to use
'processors': 'tokenize,mwt,pos,lemma',
# Language code for the language to build the Pipeline in
'lang': lang,
# Processor-specific arguments are set with keys "{processor_name}_{argument_name}"
# You only need model paths if you have a specific model outside of stanza_resources
'tokenize_model_path': os.path.join(model_name, f'{lang}_{prefix}_tokenizer.pt'),
'mwt_model_path': os.path.join(model_name, f'{lang}_{prefix}_mwt_expander.pt'),
'pos_model_path': os.path.join(model_name, f'{lang}_{prefix}_tagger.pt'),
'pos_pretrain_path': os.path.join(model_name, f'{lang}_{prefix}.pretrain.pt'),
'lemma_model_path': os.path.join(model_name, f'{lang}_{prefix}_lemmatizer.pt'),
# Use pretokenized text as input and disable tokenization
'tokenize_pretokenized': True
}
model = stanza.Pipeline(**config)
```
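Because the pipeline is configured with `tokenize_pretokenized=True`, the input should already be split into tokens; a minimal usage sketch (the example sentence is illustrative):
```python
# One pre-tokenized sentence; pass several inner lists to tag several sentences
doc = model([["Ala", "ma", "kota", "."]])
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.xpos, word.lemma)
```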
|
bdpc/resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t2.5_a0.5
|
bdpc
| 2023-10-03T19:23:04Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T18:13:29Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6481
- Accuracy: 0.69
- Brier Loss: 0.4919
- Nll: 2.4969
- F1 Micro: 0.69
- F1 Macro: 0.6317
- Ece: 0.3029
- Aurc: 0.1260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 1.4796 | 0.165 | 0.8965 | 8.4885 | 0.165 | 0.1123 | 0.2151 | 0.8341 |
| No log | 2.0 | 26 | 1.4679 | 0.165 | 0.8954 | 8.3391 | 0.165 | 0.1066 | 0.2136 | 0.8332 |
| No log | 3.0 | 39 | 1.4170 | 0.21 | 0.8858 | 6.1941 | 0.2100 | 0.0969 | 0.2433 | 0.7991 |
| No log | 4.0 | 52 | 1.3472 | 0.21 | 0.8711 | 6.0602 | 0.2100 | 0.0728 | 0.2320 | 0.7271 |
| No log | 5.0 | 65 | 1.2776 | 0.19 | 0.8572 | 6.1293 | 0.19 | 0.0537 | 0.2422 | 0.7473 |
| No log | 6.0 | 78 | 1.1840 | 0.245 | 0.8353 | 6.2405 | 0.245 | 0.1060 | 0.2810 | 0.6690 |
| No log | 7.0 | 91 | 1.0740 | 0.365 | 0.7936 | 6.3617 | 0.3650 | 0.1739 | 0.3136 | 0.3646 |
| No log | 8.0 | 104 | 1.1102 | 0.345 | 0.8081 | 5.8896 | 0.345 | 0.1812 | 0.3046 | 0.4292 |
| No log | 9.0 | 117 | 1.0735 | 0.34 | 0.7963 | 5.9970 | 0.34 | 0.1842 | 0.3028 | 0.4286 |
| No log | 10.0 | 130 | 1.1145 | 0.265 | 0.8110 | 5.9054 | 0.265 | 0.1300 | 0.2511 | 0.6350 |
| No log | 11.0 | 143 | 0.9981 | 0.325 | 0.7659 | 5.3834 | 0.325 | 0.1655 | 0.2790 | 0.4860 |
| No log | 12.0 | 156 | 1.0500 | 0.285 | 0.7898 | 4.9696 | 0.285 | 0.1594 | 0.2604 | 0.6636 |
| No log | 13.0 | 169 | 0.8764 | 0.445 | 0.6976 | 4.6456 | 0.445 | 0.2647 | 0.2779 | 0.3020 |
| No log | 14.0 | 182 | 0.9147 | 0.48 | 0.7108 | 4.4793 | 0.48 | 0.2942 | 0.3262 | 0.2862 |
| No log | 15.0 | 195 | 0.9776 | 0.38 | 0.7434 | 4.4065 | 0.38 | 0.2269 | 0.2938 | 0.5297 |
| No log | 16.0 | 208 | 0.8066 | 0.47 | 0.6494 | 3.9671 | 0.47 | 0.2966 | 0.2791 | 0.2907 |
| No log | 17.0 | 221 | 0.7766 | 0.535 | 0.6305 | 3.5250 | 0.535 | 0.3866 | 0.3003 | 0.2424 |
| No log | 18.0 | 234 | 0.8186 | 0.535 | 0.6458 | 3.3670 | 0.535 | 0.3792 | 0.3005 | 0.2311 |
| No log | 19.0 | 247 | 0.8156 | 0.52 | 0.6430 | 3.1633 | 0.52 | 0.3675 | 0.3072 | 0.2667 |
| No log | 20.0 | 260 | 0.8386 | 0.55 | 0.6462 | 3.2549 | 0.55 | 0.4251 | 0.3103 | 0.2703 |
| No log | 21.0 | 273 | 0.7996 | 0.515 | 0.6342 | 3.1396 | 0.515 | 0.3969 | 0.3177 | 0.2867 |
| No log | 22.0 | 286 | 0.8605 | 0.6 | 0.6472 | 3.2563 | 0.6 | 0.4717 | 0.3810 | 0.2113 |
| No log | 23.0 | 299 | 0.7138 | 0.595 | 0.5713 | 3.1171 | 0.595 | 0.4657 | 0.2773 | 0.2034 |
| No log | 24.0 | 312 | 0.7212 | 0.665 | 0.5740 | 2.9688 | 0.665 | 0.5474 | 0.3366 | 0.1754 |
| No log | 25.0 | 325 | 0.7463 | 0.63 | 0.5843 | 2.8998 | 0.63 | 0.5502 | 0.3432 | 0.2072 |
| No log | 26.0 | 338 | 0.7231 | 0.67 | 0.5626 | 3.1334 | 0.67 | 0.5564 | 0.3160 | 0.1521 |
| No log | 27.0 | 351 | 0.6913 | 0.68 | 0.5427 | 2.8906 | 0.68 | 0.5702 | 0.3354 | 0.1406 |
| No log | 28.0 | 364 | 0.6825 | 0.66 | 0.5342 | 2.8619 | 0.66 | 0.5615 | 0.2902 | 0.1625 |
| No log | 29.0 | 377 | 0.7015 | 0.665 | 0.5549 | 2.7315 | 0.665 | 0.5741 | 0.3305 | 0.1769 |
| No log | 30.0 | 390 | 0.6939 | 0.67 | 0.5406 | 2.7114 | 0.67 | 0.5720 | 0.3353 | 0.1420 |
| No log | 31.0 | 403 | 0.6836 | 0.69 | 0.5265 | 2.7567 | 0.69 | 0.5982 | 0.3216 | 0.1455 |
| No log | 32.0 | 416 | 0.6728 | 0.69 | 0.5211 | 2.6858 | 0.69 | 0.6056 | 0.3124 | 0.1453 |
| No log | 33.0 | 429 | 0.6926 | 0.675 | 0.5403 | 2.5815 | 0.675 | 0.6095 | 0.3258 | 0.1683 |
| No log | 34.0 | 442 | 0.6673 | 0.66 | 0.5090 | 2.5591 | 0.66 | 0.5722 | 0.2950 | 0.1385 |
| No log | 35.0 | 455 | 0.6811 | 0.675 | 0.5207 | 2.5813 | 0.675 | 0.5841 | 0.3324 | 0.1273 |
| No log | 36.0 | 468 | 0.6648 | 0.69 | 0.5119 | 2.5745 | 0.69 | 0.6225 | 0.3433 | 0.1320 |
| No log | 37.0 | 481 | 0.6623 | 0.67 | 0.5092 | 2.6134 | 0.67 | 0.6129 | 0.3204 | 0.1471 |
| No log | 38.0 | 494 | 0.6635 | 0.69 | 0.5088 | 2.3862 | 0.69 | 0.6192 | 0.3201 | 0.1311 |
| 0.7628 | 39.0 | 507 | 0.6554 | 0.685 | 0.5008 | 2.5849 | 0.685 | 0.6210 | 0.3179 | 0.1377 |
| 0.7628 | 40.0 | 520 | 0.6567 | 0.685 | 0.5022 | 2.6498 | 0.685 | 0.6310 | 0.3127 | 0.1414 |
| 0.7628 | 41.0 | 533 | 0.6558 | 0.695 | 0.4996 | 2.5917 | 0.695 | 0.6347 | 0.3115 | 0.1321 |
| 0.7628 | 42.0 | 546 | 0.6578 | 0.695 | 0.5021 | 2.4864 | 0.695 | 0.6259 | 0.3098 | 0.1306 |
| 0.7628 | 43.0 | 559 | 0.6544 | 0.685 | 0.4969 | 2.5757 | 0.685 | 0.6175 | 0.2955 | 0.1342 |
| 0.7628 | 44.0 | 572 | 0.6507 | 0.685 | 0.4944 | 2.5057 | 0.685 | 0.6257 | 0.3144 | 0.1304 |
| 0.7628 | 45.0 | 585 | 0.6501 | 0.675 | 0.4937 | 2.4903 | 0.675 | 0.6208 | 0.3091 | 0.1301 |
| 0.7628 | 46.0 | 598 | 0.6518 | 0.685 | 0.4949 | 2.4732 | 0.685 | 0.6254 | 0.3164 | 0.1235 |
| 0.7628 | 47.0 | 611 | 0.6499 | 0.685 | 0.4936 | 2.4924 | 0.685 | 0.6273 | 0.3124 | 0.1323 |
| 0.7628 | 48.0 | 624 | 0.6490 | 0.7 | 0.4925 | 2.4999 | 0.7 | 0.6353 | 0.3147 | 0.1243 |
| 0.7628 | 49.0 | 637 | 0.6510 | 0.685 | 0.4933 | 2.5758 | 0.685 | 0.6242 | 0.3206 | 0.1281 |
| 0.7628 | 50.0 | 650 | 0.6481 | 0.69 | 0.4919 | 2.4969 | 0.69 | 0.6317 | 0.3029 | 0.1260 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
bdpc/resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.5_a0.9
|
bdpc
| 2023-10-03T19:06:28Z | 197 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T18:52:55Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.5_a0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.5_a0.9
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8831
- Accuracy: 0.695
- Brier Loss: 0.4126
- Nll: 2.4628
- F1 Micro: 0.695
- F1 Macro: 0.6387
- Ece: 0.2426
- Aurc: 0.1068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 2.1233 | 0.16 | 0.8967 | 8.5697 | 0.16 | 0.1066 | 0.2078 | 0.8316 |
| No log | 2.0 | 26 | 2.1188 | 0.14 | 0.8961 | 8.2960 | 0.14 | 0.0886 | 0.1947 | 0.8419 |
| No log | 3.0 | 39 | 2.0764 | 0.195 | 0.8873 | 6.4713 | 0.195 | 0.1159 | 0.2335 | 0.7665 |
| No log | 4.0 | 52 | 2.0365 | 0.21 | 0.8787 | 5.7752 | 0.2100 | 0.0930 | 0.2376 | 0.7548 |
| No log | 5.0 | 65 | 1.9888 | 0.2 | 0.8682 | 5.8737 | 0.2000 | 0.0775 | 0.2417 | 0.7314 |
| No log | 6.0 | 78 | 1.8998 | 0.215 | 0.8465 | 5.8553 | 0.2150 | 0.0970 | 0.2586 | 0.7063 |
| No log | 7.0 | 91 | 1.8351 | 0.33 | 0.8289 | 5.7781 | 0.33 | 0.1904 | 0.3089 | 0.6103 |
| No log | 8.0 | 104 | 1.7342 | 0.4 | 0.7968 | 5.5366 | 0.4000 | 0.2476 | 0.3457 | 0.4276 |
| No log | 9.0 | 117 | 1.6787 | 0.36 | 0.7757 | 5.7414 | 0.36 | 0.2148 | 0.3062 | 0.4324 |
| No log | 10.0 | 130 | 1.6942 | 0.4 | 0.7870 | 5.2615 | 0.4000 | 0.2831 | 0.3168 | 0.5227 |
| No log | 11.0 | 143 | 1.5992 | 0.4 | 0.7489 | 4.7833 | 0.4000 | 0.2649 | 0.3053 | 0.4679 |
| No log | 12.0 | 156 | 1.6071 | 0.425 | 0.7532 | 4.2803 | 0.425 | 0.2906 | 0.3196 | 0.4646 |
| No log | 13.0 | 169 | 1.4727 | 0.48 | 0.6925 | 4.1911 | 0.48 | 0.3239 | 0.2957 | 0.3081 |
| No log | 14.0 | 182 | 1.4275 | 0.515 | 0.6705 | 3.7980 | 0.515 | 0.3569 | 0.3211 | 0.2626 |
| No log | 15.0 | 195 | 1.3282 | 0.56 | 0.6200 | 3.6359 | 0.56 | 0.4163 | 0.2990 | 0.2213 |
| No log | 16.0 | 208 | 1.3280 | 0.565 | 0.6263 | 3.4960 | 0.565 | 0.4177 | 0.3217 | 0.2346 |
| No log | 17.0 | 221 | 1.3220 | 0.595 | 0.6196 | 3.2202 | 0.595 | 0.4639 | 0.3322 | 0.1992 |
| No log | 18.0 | 234 | 1.2359 | 0.595 | 0.5840 | 3.3332 | 0.595 | 0.4780 | 0.3042 | 0.2011 |
| No log | 19.0 | 247 | 1.1690 | 0.625 | 0.5531 | 3.2423 | 0.625 | 0.5233 | 0.2940 | 0.1807 |
| No log | 20.0 | 260 | 1.1644 | 0.64 | 0.5532 | 3.0542 | 0.64 | 0.5429 | 0.3019 | 0.1821 |
| No log | 21.0 | 273 | 1.1611 | 0.62 | 0.5516 | 2.9412 | 0.62 | 0.5193 | 0.2865 | 0.2160 |
| No log | 22.0 | 286 | 1.3427 | 0.585 | 0.6361 | 3.0936 | 0.585 | 0.5089 | 0.3442 | 0.2922 |
| No log | 23.0 | 299 | 1.1238 | 0.62 | 0.5440 | 2.7924 | 0.62 | 0.5458 | 0.2654 | 0.2088 |
| No log | 24.0 | 312 | 1.2008 | 0.685 | 0.5615 | 2.5918 | 0.685 | 0.5890 | 0.3907 | 0.1516 |
| No log | 25.0 | 325 | 1.0764 | 0.695 | 0.5000 | 2.6354 | 0.695 | 0.6107 | 0.3126 | 0.1397 |
| No log | 26.0 | 338 | 1.0268 | 0.675 | 0.4822 | 2.4798 | 0.675 | 0.5992 | 0.2775 | 0.1229 |
| No log | 27.0 | 351 | 1.0340 | 0.67 | 0.4893 | 2.4316 | 0.67 | 0.5997 | 0.2763 | 0.1638 |
| No log | 28.0 | 364 | 1.0154 | 0.665 | 0.4769 | 2.6487 | 0.665 | 0.6034 | 0.2590 | 0.1487 |
| No log | 29.0 | 377 | 1.0013 | 0.64 | 0.4814 | 2.5899 | 0.64 | 0.5771 | 0.2429 | 0.1593 |
| No log | 30.0 | 390 | 1.0173 | 0.685 | 0.4714 | 2.6922 | 0.685 | 0.6178 | 0.2898 | 0.1423 |
| No log | 31.0 | 403 | 0.9630 | 0.695 | 0.4509 | 2.6349 | 0.695 | 0.6206 | 0.2746 | 0.1248 |
| No log | 32.0 | 416 | 0.9950 | 0.68 | 0.4648 | 2.4144 | 0.68 | 0.6362 | 0.3020 | 0.1725 |
| No log | 33.0 | 429 | 0.9711 | 0.72 | 0.4502 | 2.6651 | 0.72 | 0.6571 | 0.2892 | 0.1268 |
| No log | 34.0 | 442 | 0.9491 | 0.705 | 0.4425 | 2.7169 | 0.705 | 0.6425 | 0.2541 | 0.1145 |
| No log | 35.0 | 455 | 0.9213 | 0.685 | 0.4309 | 2.5736 | 0.685 | 0.6174 | 0.2380 | 0.1161 |
| No log | 36.0 | 468 | 0.9144 | 0.695 | 0.4297 | 2.4141 | 0.695 | 0.6308 | 0.2502 | 0.1154 |
| No log | 37.0 | 481 | 0.9242 | 0.715 | 0.4264 | 2.7191 | 0.715 | 0.6429 | 0.2386 | 0.1030 |
| No log | 38.0 | 494 | 0.9290 | 0.695 | 0.4346 | 2.6515 | 0.695 | 0.6367 | 0.2432 | 0.1189 |
| 1.0953 | 39.0 | 507 | 0.9110 | 0.69 | 0.4262 | 2.6615 | 0.69 | 0.6328 | 0.2368 | 0.1112 |
| 1.0953 | 40.0 | 520 | 0.9000 | 0.695 | 0.4186 | 2.4590 | 0.695 | 0.6417 | 0.2453 | 0.1070 |
| 1.0953 | 41.0 | 533 | 0.8961 | 0.69 | 0.4189 | 2.4170 | 0.69 | 0.6368 | 0.2349 | 0.1090 |
| 1.0953 | 42.0 | 546 | 0.9103 | 0.675 | 0.4286 | 2.6129 | 0.675 | 0.6193 | 0.2318 | 0.1190 |
| 1.0953 | 43.0 | 559 | 0.8858 | 0.715 | 0.4131 | 2.5243 | 0.715 | 0.6517 | 0.2462 | 0.1053 |
| 1.0953 | 44.0 | 572 | 0.8872 | 0.705 | 0.4135 | 2.3272 | 0.705 | 0.6542 | 0.2596 | 0.1051 |
| 1.0953 | 45.0 | 585 | 0.8897 | 0.715 | 0.4136 | 2.3788 | 0.715 | 0.6532 | 0.2560 | 0.1035 |
| 1.0953 | 46.0 | 598 | 0.8842 | 0.7 | 0.4117 | 2.5325 | 0.7 | 0.6446 | 0.2327 | 0.1075 |
| 1.0953 | 47.0 | 611 | 0.8857 | 0.675 | 0.4141 | 2.5451 | 0.675 | 0.6203 | 0.2473 | 0.1125 |
| 1.0953 | 48.0 | 624 | 0.8875 | 0.69 | 0.4164 | 2.4696 | 0.69 | 0.6352 | 0.2542 | 0.1109 |
| 1.0953 | 49.0 | 637 | 0.8842 | 0.69 | 0.4153 | 2.5338 | 0.69 | 0.6358 | 0.2302 | 0.1112 |
| 1.0953 | 50.0 | 650 | 0.8831 | 0.695 | 0.4126 | 2.4628 | 0.695 | 0.6387 | 0.2426 | 0.1068 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
Soheil-FM/falcon-7b-chatbot
|
Soheil-FM
| 2023-10-03T19:01:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T19:01:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
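A sketch of reproducing the quantization config listed above at load time with `BitsAndBytesConfig`; the base checkpoint is assumed to be `tiiuae/falcon-7b`, which the card does not state explicitly:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# The base checkpoint name is an assumption; trust_remote_code is needed for older Falcon revisions
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "Soheil-FM/falcon-7b-chatbot")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```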
### Framework versions
- PEFT 0.5.0
|
bdpc/resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.5_a0.7
|
bdpc
| 2023-10-03T18:52:29Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T18:38:52Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.5_a0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.5_a0.7
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8009
- Accuracy: 0.695
- Brier Loss: 0.4518
- Nll: 2.3840
- F1 Micro: 0.695
- F1 Macro: 0.6406
- Ece: 0.2661
- Aurc: 0.1211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
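A hypothetical `TrainingArguments` reconstruction of these settings (the output directory is a placeholder; the Adam betas/epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="resnet101-base_tobacco-cnn_tobacco3482_kd",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```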
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 1.7971 | 0.17 | 0.8966 | 8.4593 | 0.17 | 0.1148 | 0.2202 | 0.8308 |
| No log | 2.0 | 26 | 1.7887 | 0.13 | 0.8956 | 8.3211 | 0.13 | 0.0772 | 0.2024 | 0.8359 |
| No log | 3.0 | 39 | 1.7450 | 0.225 | 0.8868 | 6.4554 | 0.225 | 0.1165 | 0.2502 | 0.7572 |
| No log | 4.0 | 52 | 1.6811 | 0.24 | 0.8733 | 5.9510 | 0.24 | 0.0953 | 0.2651 | 0.6944 |
| No log | 5.0 | 65 | 1.6411 | 0.19 | 0.8649 | 6.0993 | 0.19 | 0.0493 | 0.2422 | 0.7783 |
| No log | 6.0 | 78 | 1.5475 | 0.195 | 0.8429 | 6.2065 | 0.195 | 0.0630 | 0.2472 | 0.7110 |
| No log | 7.0 | 91 | 1.4688 | 0.3 | 0.8197 | 6.0345 | 0.3 | 0.1481 | 0.2936 | 0.5379 |
| No log | 8.0 | 104 | 1.5036 | 0.285 | 0.8294 | 5.6660 | 0.285 | 0.1428 | 0.2869 | 0.6535 |
| No log | 9.0 | 117 | 1.3901 | 0.34 | 0.7934 | 5.9107 | 0.34 | 0.1853 | 0.2894 | 0.5277 |
| No log | 10.0 | 130 | 1.3484 | 0.37 | 0.7760 | 5.6441 | 0.37 | 0.2175 | 0.3177 | 0.5266 |
| No log | 11.0 | 143 | 1.3375 | 0.34 | 0.7734 | 5.0872 | 0.34 | 0.2083 | 0.2902 | 0.5557 |
| No log | 12.0 | 156 | 1.3639 | 0.305 | 0.7834 | 4.5070 | 0.305 | 0.1885 | 0.2674 | 0.6177 |
| No log | 13.0 | 169 | 1.2321 | 0.415 | 0.7225 | 4.3464 | 0.415 | 0.2751 | 0.2943 | 0.3825 |
| No log | 14.0 | 182 | 1.1453 | 0.44 | 0.6767 | 4.4158 | 0.44 | 0.2864 | 0.2617 | 0.3413 |
| No log | 15.0 | 195 | 1.1830 | 0.43 | 0.6965 | 3.8251 | 0.4300 | 0.2972 | 0.2912 | 0.4239 |
| No log | 16.0 | 208 | 1.0572 | 0.535 | 0.6230 | 3.5943 | 0.535 | 0.3758 | 0.2861 | 0.2291 |
| No log | 17.0 | 221 | 1.0532 | 0.585 | 0.6151 | 3.3834 | 0.585 | 0.4331 | 0.3278 | 0.1879 |
| No log | 18.0 | 234 | 1.0940 | 0.565 | 0.6374 | 3.2290 | 0.565 | 0.4431 | 0.3313 | 0.2415 |
| No log | 19.0 | 247 | 0.9877 | 0.585 | 0.5886 | 3.1068 | 0.585 | 0.4564 | 0.2896 | 0.2110 |
| No log | 20.0 | 260 | 1.0405 | 0.61 | 0.6056 | 3.1786 | 0.61 | 0.5038 | 0.3428 | 0.1962 |
| No log | 21.0 | 273 | 0.9728 | 0.635 | 0.5634 | 2.9133 | 0.635 | 0.5293 | 0.3333 | 0.1664 |
| No log | 22.0 | 286 | 0.9425 | 0.635 | 0.5527 | 2.8909 | 0.635 | 0.5237 | 0.3131 | 0.1796 |
| No log | 23.0 | 299 | 0.9549 | 0.65 | 0.5605 | 2.8074 | 0.65 | 0.5539 | 0.3283 | 0.1914 |
| No log | 24.0 | 312 | 1.0085 | 0.67 | 0.5733 | 2.8377 | 0.67 | 0.5543 | 0.3525 | 0.1571 |
| No log | 25.0 | 325 | 0.9140 | 0.655 | 0.5257 | 2.5878 | 0.655 | 0.5603 | 0.3171 | 0.1495 |
| No log | 26.0 | 338 | 0.8979 | 0.65 | 0.5249 | 2.7723 | 0.65 | 0.5563 | 0.2843 | 0.1646 |
| No log | 27.0 | 351 | 0.8912 | 0.675 | 0.5082 | 2.6562 | 0.675 | 0.5837 | 0.2871 | 0.1380 |
| No log | 28.0 | 364 | 0.8966 | 0.66 | 0.5242 | 2.3150 | 0.66 | 0.5890 | 0.3180 | 0.1777 |
| No log | 29.0 | 377 | 0.8602 | 0.67 | 0.4959 | 2.5813 | 0.67 | 0.5866 | 0.3023 | 0.1319 |
| No log | 30.0 | 390 | 0.8434 | 0.69 | 0.4779 | 2.5451 | 0.69 | 0.6130 | 0.3061 | 0.1188 |
| No log | 31.0 | 403 | 0.8406 | 0.715 | 0.4782 | 2.3339 | 0.715 | 0.6438 | 0.3241 | 0.1092 |
| No log | 32.0 | 416 | 0.8294 | 0.71 | 0.4726 | 2.5394 | 0.7100 | 0.6308 | 0.2922 | 0.1218 |
| No log | 33.0 | 429 | 0.8329 | 0.68 | 0.4763 | 2.4520 | 0.68 | 0.6166 | 0.2592 | 0.1396 |
| No log | 34.0 | 442 | 0.8937 | 0.69 | 0.5015 | 2.5649 | 0.69 | 0.6357 | 0.3293 | 0.1279 |
| No log | 35.0 | 455 | 0.8358 | 0.665 | 0.4807 | 2.4437 | 0.665 | 0.6178 | 0.2380 | 0.1473 |
| No log | 36.0 | 468 | 0.8283 | 0.685 | 0.4747 | 2.5408 | 0.685 | 0.6304 | 0.3126 | 0.1361 |
| No log | 37.0 | 481 | 0.8235 | 0.685 | 0.4707 | 2.4620 | 0.685 | 0.6300 | 0.2757 | 0.1343 |
| No log | 38.0 | 494 | 0.8289 | 0.68 | 0.4778 | 2.5443 | 0.68 | 0.6305 | 0.2935 | 0.1469 |
| 0.9462 | 39.0 | 507 | 0.8373 | 0.69 | 0.4728 | 2.5775 | 0.69 | 0.6281 | 0.3028 | 0.1149 |
| 0.9462 | 40.0 | 520 | 0.8062 | 0.715 | 0.4548 | 2.3673 | 0.715 | 0.6587 | 0.2776 | 0.1133 |
| 0.9462 | 41.0 | 533 | 0.7990 | 0.705 | 0.4517 | 2.3284 | 0.705 | 0.6463 | 0.2716 | 0.1185 |
| 0.9462 | 42.0 | 546 | 0.8210 | 0.7 | 0.4650 | 2.5646 | 0.7 | 0.6432 | 0.2690 | 0.1199 |
| 0.9462 | 43.0 | 559 | 0.8102 | 0.695 | 0.4558 | 2.5651 | 0.695 | 0.6442 | 0.2656 | 0.1184 |
| 0.9462 | 44.0 | 572 | 0.8061 | 0.69 | 0.4566 | 2.5154 | 0.69 | 0.6356 | 0.2816 | 0.1267 |
| 0.9462 | 45.0 | 585 | 0.8018 | 0.7 | 0.4531 | 2.4982 | 0.7 | 0.6419 | 0.2696 | 0.1192 |
| 0.9462 | 46.0 | 598 | 0.8040 | 0.7 | 0.4521 | 2.5309 | 0.7 | 0.6448 | 0.2797 | 0.1166 |
| 0.9462 | 47.0 | 611 | 0.8062 | 0.68 | 0.4560 | 2.5452 | 0.68 | 0.6370 | 0.2744 | 0.1217 |
| 0.9462 | 48.0 | 624 | 0.8011 | 0.69 | 0.4529 | 2.4281 | 0.69 | 0.6402 | 0.2594 | 0.1224 |
| 0.9462 | 49.0 | 637 | 0.8017 | 0.69 | 0.4532 | 2.4239 | 0.69 | 0.6400 | 0.2613 | 0.1261 |
| 0.9462 | 50.0 | 650 | 0.8009 | 0.695 | 0.4518 | 2.3840 | 0.695 | 0.6406 | 0.2661 | 0.1211 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
Palunay/I1
|
Palunay
| 2023-10-03T18:45:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-03T18:41:23Z |
A young demoness girl with a strong physique and four horns: two at the front and two larger ones at the sides. Red-yellow-brown curly hair; gray-blue eyes with an orange tinge. There is a small hump on the nose, which somewhat resembles a cat's; her ears are also cat-like, but always slightly lowered because they sit where ordinary human ears would be. Red lips and protruding fangs. Loose hair. The chest is medium-sized, the hips are visible, and the muscles of the buttocks and arms are large. She wears black armor with gold inserts and green precious stones, and large earrings; her feet and hands bear strong claws. She also has a strong tail, resembling the tail of a jasher and a cat at the same time. She wears large black shoulder pads, black dragon-bone trousers, a huge breastplate, and knee-high boots. On her back are two large, sharp demonic wings.
|
Terps/whisper-small-dv
|
Terps
| 2023-10-03T18:42:46Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-02T19:46:49Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 12.72733595298536
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1677
- Wer Ortho: 62.0238
- Wer: 12.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
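A hypothetical `Seq2SeqTrainingArguments` reconstruction of these settings (the output directory is a placeholder; the Adam betas/epsilon listed above are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-dv",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
)
```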
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1225 | 1.63 | 500 | 0.1677 | 62.0238 | 12.7273 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bdpc/resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.5_a0.5
|
bdpc
| 2023-10-03T18:38:27Z | 199 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T18:20:05Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.5_a0.5
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6500
- Accuracy: 0.69
- Brier Loss: 0.5003
- Nll: 2.5629
- F1 Micro: 0.69
- F1 Macro: 0.6350
- Ece: 0.3098
- Aurc: 0.1329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 1.4712 | 0.165 | 0.8966 | 8.4652 | 0.165 | 0.1101 | 0.2129 | 0.8342 |
| No log | 2.0 | 26 | 1.4590 | 0.165 | 0.8951 | 8.1097 | 0.165 | 0.1059 | 0.2034 | 0.8021 |
| No log | 3.0 | 39 | 1.4178 | 0.175 | 0.8873 | 6.8095 | 0.175 | 0.0813 | 0.2150 | 0.7994 |
| No log | 4.0 | 52 | 1.3342 | 0.18 | 0.8702 | 6.4137 | 0.18 | 0.0475 | 0.2314 | 0.7558 |
| No log | 5.0 | 65 | 1.2828 | 0.2 | 0.8587 | 6.1547 | 0.2000 | 0.0642 | 0.2429 | 0.7009 |
| No log | 6.0 | 78 | 1.2675 | 0.205 | 0.8548 | 6.1395 | 0.205 | 0.0612 | 0.2348 | 0.7022 |
| No log | 7.0 | 91 | 1.0716 | 0.31 | 0.7962 | 6.4589 | 0.31 | 0.1241 | 0.2787 | 0.4433 |
| No log | 8.0 | 104 | 1.1184 | 0.29 | 0.8126 | 6.2585 | 0.29 | 0.1394 | 0.2863 | 0.5819 |
| No log | 9.0 | 117 | 1.1021 | 0.31 | 0.8075 | 6.0370 | 0.31 | 0.1697 | 0.2834 | 0.5458 |
| No log | 10.0 | 130 | 1.0268 | 0.33 | 0.7815 | 6.1370 | 0.33 | 0.1921 | 0.2856 | 0.5395 |
| No log | 11.0 | 143 | 1.0290 | 0.355 | 0.7759 | 5.3640 | 0.3550 | 0.2143 | 0.2795 | 0.4697 |
| No log | 12.0 | 156 | 0.9169 | 0.36 | 0.7262 | 5.2997 | 0.36 | 0.1995 | 0.2761 | 0.4070 |
| No log | 13.0 | 169 | 0.9903 | 0.36 | 0.7586 | 4.9404 | 0.36 | 0.2200 | 0.2832 | 0.5343 |
| No log | 14.0 | 182 | 0.9128 | 0.425 | 0.7082 | 4.5862 | 0.425 | 0.2706 | 0.2834 | 0.3542 |
| No log | 15.0 | 195 | 1.0046 | 0.405 | 0.7441 | 3.9763 | 0.405 | 0.2759 | 0.3142 | 0.4602 |
| No log | 16.0 | 208 | 0.9277 | 0.41 | 0.7146 | 4.3670 | 0.41 | 0.2763 | 0.2695 | 0.4409 |
| No log | 17.0 | 221 | 0.9726 | 0.505 | 0.7208 | 3.5350 | 0.505 | 0.3736 | 0.3332 | 0.3469 |
| No log | 18.0 | 234 | 0.7717 | 0.505 | 0.6280 | 3.4386 | 0.505 | 0.3412 | 0.2564 | 0.2567 |
| No log | 19.0 | 247 | 0.7723 | 0.58 | 0.6143 | 3.6207 | 0.58 | 0.4125 | 0.3178 | 0.1847 |
| No log | 20.0 | 260 | 0.8182 | 0.57 | 0.6419 | 3.1633 | 0.57 | 0.4855 | 0.3517 | 0.2530 |
| No log | 21.0 | 273 | 0.7333 | 0.58 | 0.5891 | 3.3014 | 0.58 | 0.4512 | 0.2718 | 0.2137 |
| No log | 22.0 | 286 | 0.7374 | 0.665 | 0.5856 | 3.0299 | 0.665 | 0.5432 | 0.3459 | 0.1657 |
| No log | 23.0 | 299 | 0.7083 | 0.645 | 0.5564 | 3.0874 | 0.645 | 0.5180 | 0.3112 | 0.1608 |
| No log | 24.0 | 312 | 0.7480 | 0.64 | 0.5901 | 3.0218 | 0.64 | 0.5410 | 0.3701 | 0.1976 |
| No log | 25.0 | 325 | 0.7547 | 0.68 | 0.5894 | 2.9002 | 0.68 | 0.5801 | 0.3817 | 0.1559 |
| No log | 26.0 | 338 | 0.6998 | 0.65 | 0.5474 | 2.9402 | 0.65 | 0.5468 | 0.2875 | 0.1707 |
| No log | 27.0 | 351 | 0.6967 | 0.66 | 0.5506 | 2.8344 | 0.66 | 0.5578 | 0.3105 | 0.1707 |
| No log | 28.0 | 364 | 0.6733 | 0.655 | 0.5332 | 2.6492 | 0.655 | 0.5719 | 0.2935 | 0.1554 |
| No log | 29.0 | 377 | 0.7162 | 0.67 | 0.5596 | 2.7250 | 0.67 | 0.5721 | 0.3388 | 0.1423 |
| No log | 30.0 | 390 | 0.6826 | 0.665 | 0.5291 | 2.7460 | 0.665 | 0.5797 | 0.3353 | 0.1469 |
| No log | 31.0 | 403 | 0.6761 | 0.665 | 0.5195 | 2.7938 | 0.665 | 0.5647 | 0.3096 | 0.1485 |
| No log | 32.0 | 416 | 0.6745 | 0.695 | 0.5295 | 2.6172 | 0.695 | 0.6160 | 0.3171 | 0.1636 |
| No log | 33.0 | 429 | 0.6785 | 0.695 | 0.5242 | 2.5816 | 0.695 | 0.6115 | 0.3475 | 0.1349 |
| No log | 34.0 | 442 | 0.6688 | 0.665 | 0.5174 | 2.6401 | 0.665 | 0.5833 | 0.2988 | 0.1427 |
| No log | 35.0 | 455 | 0.6767 | 0.675 | 0.5275 | 2.6364 | 0.675 | 0.6027 | 0.3285 | 0.1483 |
| No log | 36.0 | 468 | 0.6605 | 0.695 | 0.5076 | 2.6483 | 0.695 | 0.6252 | 0.3127 | 0.1372 |
| No log | 37.0 | 481 | 0.6538 | 0.705 | 0.5029 | 2.6284 | 0.705 | 0.6340 | 0.3173 | 0.1220 |
| No log | 38.0 | 494 | 0.6610 | 0.695 | 0.5102 | 2.5052 | 0.695 | 0.6375 | 0.3128 | 0.1298 |
| 0.7532 | 39.0 | 507 | 0.6618 | 0.695 | 0.5110 | 2.5663 | 0.695 | 0.6268 | 0.3297 | 0.1367 |
| 0.7532 | 40.0 | 520 | 0.6749 | 0.69 | 0.5235 | 2.5343 | 0.69 | 0.6341 | 0.3256 | 0.1332 |
| 0.7532 | 41.0 | 533 | 0.6574 | 0.695 | 0.5062 | 2.4223 | 0.695 | 0.6338 | 0.3292 | 0.1469 |
| 0.7532 | 42.0 | 546 | 0.6530 | 0.695 | 0.5026 | 2.6189 | 0.695 | 0.6390 | 0.2950 | 0.1391 |
| 0.7532 | 43.0 | 559 | 0.6509 | 0.685 | 0.5003 | 2.5417 | 0.685 | 0.6299 | 0.3150 | 0.1368 |
| 0.7532 | 44.0 | 572 | 0.6520 | 0.71 | 0.5030 | 2.4796 | 0.7100 | 0.6453 | 0.3251 | 0.1286 |
| 0.7532 | 45.0 | 585 | 0.6494 | 0.69 | 0.4994 | 2.5431 | 0.69 | 0.6327 | 0.3138 | 0.1279 |
| 0.7532 | 46.0 | 598 | 0.6515 | 0.71 | 0.5007 | 2.5295 | 0.7100 | 0.6541 | 0.3307 | 0.1208 |
| 0.7532 | 47.0 | 611 | 0.6477 | 0.69 | 0.4979 | 2.5971 | 0.69 | 0.6323 | 0.3263 | 0.1281 |
| 0.7532 | 48.0 | 624 | 0.6495 | 0.7 | 0.5007 | 2.6162 | 0.7 | 0.6395 | 0.3412 | 0.1272 |
| 0.7532 | 49.0 | 637 | 0.6478 | 0.7 | 0.4968 | 2.4946 | 0.7 | 0.6386 | 0.3191 | 0.1309 |
| 0.7532 | 50.0 | 650 | 0.6500 | 0.69 | 0.5003 | 2.5629 | 0.69 | 0.6350 | 0.3098 | 0.1329 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
mehle/anders
|
mehle
| 2023-10-03T18:22:33Z | 0 | 0 | null |
[
"audio-to-audio",
"da",
"en",
"license:apache-2.0",
"region:us"
] |
audio-to-audio
| 2023-10-03T18:21:37Z |
---
license: apache-2.0
language:
- da
- en
pipeline_tag: audio-to-audio
---
|
teknium/airoboros-mistral2.2-7b
|
teknium
| 2023-10-03T18:18:12Z | 62 | 36 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"llama-2",
"instruct",
"finetune",
"alpaca",
"gpt4",
"synthetic data",
"distillation",
"en",
"dataset:jondurbin/airoboros-2.2.1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T15:07:51Z |
---
base_model: mistralai/Mistral-7b-V0.1
tags:
- llama-2
- instruct
- finetune
- alpaca
- gpt4
- synthetic data
- distillation
datasets:
- jondurbin/airoboros-2.2.1
model-index:
- name: airoboros2.2-mistral-7b
results: []
license: mit
language:
- en
---
Mistral trained with the airoboros dataset!

The actual dataset used was airoboros 2.2, but it appears to have been replaced on HF with 2.2.1.
Prompt Format:
```
USER: <prompt>
ASSISTANT:
```
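A minimal generation sketch using this prompt format (assuming this repo's id and a plain `transformers` setup; the sampling settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/airoboros-mistral2.2-7b"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# build the USER/ASSISTANT prompt shown above
prompt = "USER: Write a haiku about llamas.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```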
TruthfulQA:
```
hf-causal-experimental (pretrained=/home/teknium/dakota/lm-evaluation-harness/airoboros2.2-mistral/,dtype=float16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.3562|± |0.0168|
| | |mc2 |0.5217|± |0.0156|
```
Wandb training charts: https://wandb.ai/teknium1/airoboros-mistral-7b/runs/airoboros-mistral-1?workspace=user-teknium1
More info to come
|
LoneStriker/dolphin-2.0-mistral-7b-6.0bpw-h6-exl2
|
LoneStriker
| 2023-10-03T17:59:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T17:32:05Z |
---
license: apache-2.0
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
language:
- en
---
Dolphin 2.0 🐬
https://erichartford.com/dolphin
Dolphin-2.0-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on Mistral AI's Mistral-7B (Apache-2.0 licensed), so it is suitable for commercial or non-commercial use.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
## Training
It took 48 hours to train 10 epochs on 4x A100s.
Prompt format:
This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
```
Example:
```
<|im_start|>system
you are an expert dolphin trainer<|im_end|>
<|im_start|>user
What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>
```
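A small helper for assembling this prompt programmatically (a sketch; appending the trailing `<|im_start|>assistant` header to cue the reply is the usual ChatML convention and is an assumption here):
```python
def chatml_prompt(system: str, user: str) -> str:
    # assemble a ChatML prompt in the format shown above
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "What is the best way to train a dolphin to obey me? Please answer step by step.",
))
```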
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to WingLian, and TheBloke for helpful advice
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output

[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
|
Dugald/MarketingModel
|
Dugald
| 2023-10-03T17:29:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T17:29:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
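A minimal sketch of loading the adapter with the 8-bit configuration above (the base model id is a placeholder, since the card does not state which base model the adapter was trained on):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 8-bit loading, matching the values listed above
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

# "base-model-id" is a placeholder for the (unstated) base model
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("base-model-id")

# attach the adapter weights from this repo on top of the quantized base
model = PeftModel.from_pretrained(base, "Dugald/MarketingModel")
```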
### Framework versions
- PEFT 0.6.0.dev0
|
Bainbridge/umberto-uncased-covid-sentiment
|
Bainbridge
| 2023-10-03T17:16:42Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:Musixmatch/umberto-wikipedia-uncased-v1",
"base_model:finetune:Musixmatch/umberto-wikipedia-uncased-v1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-03T17:16:17Z |
---
base_model: Musixmatch/umberto-wikipedia-uncased-v1
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: umberto-wikipedia-uncased-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# umberto-wikipedia-uncased-v1
This model is a fine-tuned version of [Musixmatch/umberto-wikipedia-uncased-v1](https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4643
- F1: 0.6160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5923 | 0.33 | 20 | 0.6075 | 0.2974 |
| 0.6694 | 0.67 | 40 | 0.5938 | 0.2974 |
| 0.5225 | 1.0 | 60 | 0.5346 | 0.3024 |
| 0.5458 | 1.33 | 80 | 0.5172 | 0.4682 |
| 0.5052 | 1.67 | 100 | 0.5550 | 0.4361 |
| 0.3689 | 2.0 | 120 | 0.5188 | 0.4395 |
| 0.4097 | 2.33 | 140 | 0.4918 | 0.4676 |
| 0.4904 | 2.67 | 160 | 0.4756 | 0.4640 |
| 0.3696 | 3.0 | 180 | 0.4788 | 0.4688 |
| 0.3631 | 3.33 | 200 | 0.4751 | 0.4735 |
| 0.4345 | 3.67 | 220 | 0.4609 | 0.4797 |
| 0.3398 | 4.0 | 240 | 0.4637 | 0.5673 |
| 0.2985 | 4.33 | 260 | 0.4850 | 0.5092 |
| 0.3248 | 4.67 | 280 | 0.5089 | 0.6057 |
| 0.3323 | 5.0 | 300 | 0.4643 | 0.6160 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.1.0.dev20230628+cu121
- Datasets 2.14.5
- Tokenizers 0.14.0
|
TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ
|
TheBloke
| 2023-10-03T17:12:08Z | 100 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-10-03T17:06:23Z |
---
base_model: PY007/TinyLlama-1.1B-intermediate-step-480k-1T
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
inference: false
language:
- en
license: apache-2.0
model_creator: Zhang Peiyuan
model_name: TinyLlama 1.1B Intermediate Step 480K 1T
model_type: tinyllama
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# TinyLlama 1.1B Intermediate Step 480K 1T - GPTQ
- Model creator: [Zhang Peiyuan](https://huggingface.co/PY007)
- Original model: [TinyLlama 1.1B Intermediate Step 480K 1T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Zhang Peiyuan's TinyLlama 1.1B Intermediate Step 480K 1T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GGUF)
* [Zhang Peiyuan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.77 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.82 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.23 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.26 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.32 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.79 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ`:
```shell
mkdir TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ
huggingface-cli download TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ --local-dir TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ
huggingface-cli download TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ --local-dir TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # placeholder; the template below requires this to be defined
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/TinyLlama-1.1B-intermediate-step-480k-1T-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # placeholder; the template below requires this to be defined
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Zhang Peiyuan's TinyLlama 1.1B Intermediate Step 480K 1T
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is an intermediate checkpoint with 480K steps and 1007B tokens.
#### How to use
You will need transformers>=4.31.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
do_sample=True,
top_k=10,
num_return_sequences=1,
repetition_penalty=1.5,
eos_token_id=tokenizer.eos_token_id,
max_length=500,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
|
TheBloke/em_german_mistral_v01-GPTQ
|
TheBloke
| 2023-10-03T17:05:54Z | 17 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"pytorch",
"llama",
"llama-2",
"german",
"deutsch",
"de",
"base_model:jphme/em_german_mistral_v01",
"base_model:quantized:jphme/em_german_mistral_v01",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-10-03T16:25:06Z |
---
base_model: jphme/em_german_mistral_v01
inference: false
language:
- de
library_name: transformers
license: apache-2.0
model_creator: Jan Philipp Harries
model_name: EM German Mistral v01
model_type: mistral
pipeline_tag: text-generation
prompt_template: 'Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
tags:
- mistral
- pytorch
- llama
- llama-2
- german
- deutsch
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# EM German Mistral v01 - GPTQ
- Model creator: [Jan Philipp Harries](https://huggingface.co/jphme)
- Original model: [EM German Mistral v01](https://huggingface.co/jphme/em_german_mistral_v01)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Jan Philipp Harries's EM German Mistral v01](https://huggingface.co/jphme/em_german_mistral_v01).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/em_german_mistral_v01-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF)
* [Jan Philipp Harries's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jphme/em_german_mistral_v01)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: EmGerman
```
Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 8192 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 8192 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 8192 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 8192 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 8192 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad) | 8192 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/em_german_mistral_v01-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/em_german_mistral_v01-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `em_german_mistral_v01-GPTQ`:
```shell
mkdir em_german_mistral_v01-GPTQ
huggingface-cli download TheBloke/em_german_mistral_v01-GPTQ --local-dir em_german_mistral_v01-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir em_german_mistral_v01-GPTQ
huggingface-cli download TheBloke/em_german_mistral_v01-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir em_german_mistral_v01-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir em_german_mistral_v01-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/em_german_mistral_v01-GPTQ --local-dir em_german_mistral_v01-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/em_german_mistral_v01-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/em_german_mistral_v01-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `em_german_mistral_v01-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/em_german_mistral_v01-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
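For longer generations you may want to stream tokens as they arrive. Continuing the example above (reusing `client` and `prompt_template`), `InferenceClient.text_generation` also accepts `stream=True`; a brief sketch:
```python
# Streaming variant: tokens are printed as soon as TGI emits them
for token in client.text_generation(prompt_template,
                                    max_new_tokens=256,
                                    temperature=0.7,
                                    stream=True):
    print(token, end="", flush=True)
print()
```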
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/em_german_mistral_v01-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jan Philipp Harries's EM German Mistral v01

# EM German Mistral 7b (Alpha)
([Zur deutschen Version](#deutsch-em-german))
**EM German (v01)** is an experimental llama2 and mistral-based model family, finetuned on a large dataset of various instructions in German language. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German language content.
**This is a very first experimental version based on Mistral's 7b model.** It still uses the HF Llama model implementation rather than the Mistral one, so it lacks features like GQA and sliding-window attention - use at your own discretion.
For more mature models, please look at the llama-2 based models of the EM family (see links below).
# Links & Demos
We will publish further instructions, updates and code-snippets in the project's [Github-Repo](https://github.com/jphme/EM_German).
## Model Links
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-hf) 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gguf) | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-13b-hf) 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/jphme/em_german_13b_v01_gptq) | soon | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-70b-hf) 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gguf) | soon |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | soon | soon | tbc |
| [LeoLm 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
## Demos:
You can use some of the models with **free** Google Colab instances (e.g. the 7b model in 8bit or the 13b model with GPTQ):
* [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* Further Examples for LLM apps will follow soon...
# Prompt Format
This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows:
```
Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:
```
You can swap the standard system prompt for a better suited one (see below for RAG-tasks).
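For programmatic use, a small helper for assembling prompts in this format might look like the following sketch (the helper name is illustrative; only the template itself is prescribed by the model):
```python
DEFAULT_SYSTEM = "Du bist ein hilfreicher Assistent."

def build_prompt(instruction: str, system: str = DEFAULT_SYSTEM) -> str:
    """Assemble a single-turn prompt in the Vicuna-style format used by EM German."""
    return f"{system} USER: {instruction} ASSISTANT:"

print(build_prompt("Wie heißt die Hauptstadt von Deutschland?"))
```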
# Results
Attached are a few example outputs. I also developed a few small German benchmarks (based on the Eleuther LM Evaluation Harness), which I will publish soon.
## Example Output
Prompt:
```
Du bist ein hilfreicher Assistent USER: Ein Buch hat 500 Seiten. Jede Seite enthält 300 Wörter. Wie viele Wörter sind im gesamten Buch enthalten? ASSISTANT:
```
Response EM 7b v01
```
Die Gesamtzahl der Wörter in dem Buch kann berechnet werden, indem man die Anzahl der Wörter pro Seite mit der Gesamtzahl der Seiten multipliziert.
Gesamtzahl der Wörter = Anzahl der Wörter pro Seite * Gesamtzahl der Seiten = 300 Wörter pro Seite * 500 Seiten = 150.000 Wörter
Das Buch enthält also insgesamt 150.000 Wörter.
```
Prompt:
```
Du bist ein hilfreicher Assistent USER: Erläutere die Grundprinzipien der evolutionären Biologie und Genetik, indem du den Prozess der natürlichen Selektion, die Rolle von Mutationen und genetischer Drift sowie das Konzept der adaptiven Radiation erklärst. ASSISTANT:
```
Response EM 7b v01
```
Die Evolutionäre Biologie und Genetik sind zwei intim verwandte Disziplinen, die sich mit dem Studium des Verhaltens und der Entwicklung von Organismen in der Natur befassen. Sie untersuchen, wie Arten sich im Laufe der Zeit entwickeln und wie diese Veränderungen genetisch bedingt sind.
1. Natürliche Selektion: Die natürliche Selektion ist ein grundlegendes Prinzip in der Evolutionäre Biologie. Es besagt, dass Individuen innerhalb einer Spezies unterschiedliche Fähigkeiten oder Merkmale aufweisen können. Diese Merkmale beeinflussen ihre Überlebens- und Fortpflanzungsrate. Diejenigen mit Vorteilen haben eine höhere Wahrscheinlichkeit, ihre Gene an zukünftige Generationen weiterzugeben und so die Art im Laufe der Zeit zu verändern.
2. Rolle von Mutationen: Mutationen sind Veränderungen im Genom eines Organismus, die entweder durch zufällige Fehler während der Replikation oder durch Exposition gegenüber Strahlung verursacht werden. Diese Veränderungen können dazu führen, dass ein Individuum neue Eigenschaften aufweist, die es im Kampf ums Überleben vorteilhaft machen könnten. Diese Mutationen werden dann von der natürlichen Selektion beeinflusst und können so zur Entwicklung neuer Arten beitragen.
3. Genetische Drift: Genetische Drift ist ein zufälliger Prozess, der durch die Verteilung von Allelen in einer Population stattfindet. Wenn eine Population kleiner wird oder wenn die Überlebens- und Fortpflanzungsraten zwischen den Individuen variieren, kann dies dazu führen, dass bestimmte Gene stärker oder weniger häufig werden. Dies kann dazu führen, dass die Population im Laufe der Zeit unterschiedliche Merkmale aufweist.
4. Adaptive Radiation: Adaptive Radiation ist ein Prozess, bei dem eine Art mehrere verschiedene Linien auslässt, die sich auf verschiedene Weise an ihre Umgebung anpassen. Dies geschieht normalerweise, wenn die Umwelt sich ändert und die Art neue Fähigkeiten benötigt, um überleben zu können. Die verschiedenen Linien können dann unterschiedliche Eigenschaften entwickeln, die ihnen helfen, in dieser neuen Umgebung zu überleben.
```
## Factual Retrieval / RAG
EM Models offer a special format for Factual Retrieval/RAG that is optimized for minimizing hallucinations. We use the same format as Airoboros for this:
### Example
Prompt
```
Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist! USER: BEGININPUT
BEGINCONTEXT
Url: https://www.jph.me
ENDCONTEXT
Das Wetter in Düsseldorf wird heute schön und sonnig!
ENDINPUT
BEGININSTRUCTION Was ist 1+1? ENDINSTRUCTION ASSISTANT:
```
Model response:
```
Mit den gegebenen Informationen ist diese Frage nicht zu beantworten.
```
Note that the model would normally answer a trivial question like this correctly, but it is trained to refuse an answer if there is no source between BEGININPUT and ENDINPUT that provides grounding.
If we keep the same Context/Input but ask instead:
```
BEGININSTRUCTION Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an! ENDINSTRUCTION ASSISTANT:
```
We get the correct answer with the source:
```
Das Wetter in Düsseldorf wird heute schön und sonnig! (Quelle: Url: https://www.jph.me)
```
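As a hedged sketch, prompts in this grounded-QA format can also be assembled programmatically (the helper and its whitespace handling are illustrative; when in doubt, follow the literal template above, including its tag spellings):
```python
RAG_SYSTEM = (
    "Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags "
    "BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie "
    "Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der "
    "Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu "
    "finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine "
    "Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist!"
)

def build_rag_prompt(question: str, sources: list[tuple[str, str]]) -> str:
    """sources is a list of (metadata, text) pairs, e.g. [("Url: https://www.jph.me", "Das Wetter ...")]."""
    blocks = [
        f"BEGININPUT\nBEGINCONTEXT\n{metadata}\nENDCONTEXT\n{text}\nENDINPUT"
        for metadata, text in sources
    ]
    context = "\n".join(blocks)
    return f"{RAG_SYSTEM} USER: {context}\nBEGININSTRUCTION {question} ENDINSTRUCTION ASSISTANT:"
```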
# Evaluation
I will publish some custom German benchmarks and the results for them soon.
# Dataset
The dataset is a mixture of multi-turn chats, retrieval instructions and synthetically generated instructions spanning many topics and applications.
## Use with transformers
Unchanged from the base model:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'jphme/em_german_7b_v01'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
def ask_model(instruction, system='Du bist ein hilfreicher Assistent.'):
prompt=f"{system} USER: {instruction} ASSISTANT:"
input_tokens=tokenizer(prompt, return_tensors="pt").to(model.device)
output_tokens=model.generate(**input_tokens, max_new_tokens=200)[0]
answer=tokenizer.decode(output_tokens, skip_special_tokens=True)
return answer
print(ask_model("Nenne mir 10 gute Gründe dafür, heute Sport zu machen!"))
```
# Limitations & Biases
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
# Acknowledgements:
Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl, which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code, from which I borrowed many ideas and code snippets.
The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/).
# Contact
If you are interested in customized LLMs for business applications, please get in contact with me via [my website](https://www.jph.me). I am also always happy about suggestions and feedback.
*PS: I am also still searching for a Co-Founder*.
# Disclaimer:
The license on this model does not constitute legal advice. I am not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license applies and is distributed with the model files.
-------------
-------------
-------------
# (Deutsch) EM German
**EM German (v01)** ist eine experimentelle, Llama2-basierte KI-Modellreihe, die in deutscher Sprache fine-getuned wurde.
Die Modelle sind für deutschen Text optimiert und können Inhalte in deutscher Sprache verstehen und generieren.
**Dieses 7b-Modell wurde zusätzlich mit >3 Milliarden Token deutscher Texte vortrainiert**.
# Links & Demos
Im [Github-Repo](https://github.com/jphme/EM_German) Repo werde ich weitere Beispiele und Code-Snippets veröffentlichen.
## Model Links
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-hf) 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gguf) | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-13b-hf) 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/jphme/em_german_13b_v01_gptq) | soon | soon |
| [Llama2](https://huggingface.co/meta-llama/Llama-2-70b-hf) 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gguf) | soon |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | soon | soon | tbc |
| [LeoLm 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc |
## Colab:
Einige der Modelle können mit kostenlosen Google Colab-Instanzen verwendet werden (z.B. das 7b-Modell in 8-Bit oder das 13b-Modell mit GPTQ):
* [Beispiel-Notebook 13b mit GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Beispiel-Notebook 7b mit 8-Bit](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* Weitere Beispiele für LLM-Apps folgen in Kürze...
# Eingabeformat
Dieses Modell folgt dem Vicuna-Format ohne Zeilenumbrüche (sollte aber auch mit Zeilenumbrüchen funktionieren). Das Format lautet wie folgt:
```
Du bist ein hilfreicher Assistent. USER: <Anweisung> ASSISTANT:
```
Sie können das Standard-Systemprompt aber auch ändern (siehe unten für ein Beispiel bei Aufgaben mit externen Datenbanken).
# Ergebnisse
Anbei einige Beispiele für Modell-Outputs. Wir werden demnächst einige kleine deutsche Benchmarks veröffentlichen (basierend auf dem Eleuther LM Evaluation Harness), die wir bei diesen Modellen bereits getestet haben.
## Beispiel-Ausgaben
Prompt:
```
Du bist ein hilfreicher Assistent USER: Ein Buch hat 500 Seiten. Jede Seite enthält 300 Wörter. Wie viele Wörter sind im gesamten Buch enthalten? ASSISTANT:
```
Antwort EM 7b v01
```
Die Gesamtzahl der Wörter in dem Buch kann berechnet werden, indem man die Anzahl der Wörter pro Seite mit der Gesamtzahl der Seiten multipliziert.
Gesamtzahl der Wörter = Anzahl der Wörter pro Seite * Gesamtzahl der Seiten = 300 Wörter pro Seite * 500 Seiten = 150.000 Wörter
Das Buch enthält also insgesamt 150.000 Wörter.
```
Prompt:
```
Du bist ein hilfreicher Assistent USER: Erläutere die Grundprinzipien der evolutionären Biologie und Genetik, indem du den Prozess der natürlichen Selektion, die Rolle von Mutationen und genetischer Drift sowie das Konzept der adaptiven Radiation erklärst. ASSISTANT:
```
Antwort EM 7b v01:
```
Die Evolutionäre Biologie und Genetik sind zwei intim verwandte Disziplinen, die sich mit dem Studium des Verhaltens und der Entwicklung von Organismen in der Natur befassen. Sie untersuchen, wie Arten sich im Laufe der Zeit entwickeln und wie diese Veränderungen genetisch bedingt sind.
1. Natürliche Selektion: Die natürliche Selektion ist ein grundlegendes Prinzip in der Evolutionäre Biologie. Es besagt, dass Individuen innerhalb einer Spezies unterschiedliche Fähigkeiten oder Merkmale aufweisen können. Diese Merkmale beeinflussen ihre Überlebens- und Fortpflanzungsrate. Diejenigen mit Vorteilen haben eine höhere Wahrscheinlichkeit, ihre Gene an zukünftige Generationen weiterzugeben und so die Art im Laufe der Zeit zu verändern.
2. Rolle von Mutationen: Mutationen sind Veränderungen im Genom eines Organismus, die entweder durch zufällige Fehler während der Replikation oder durch Exposition gegenüber Strahlung verursacht werden. Diese Veränderungen können dazu führen, dass ein Individuum neue Eigenschaften aufweist, die es im Kampf ums Überleben vorteilhaft machen könnten. Diese Mutationen werden dann von der natürlichen Selektion beeinflusst und können so zur Entwicklung neuer Arten beitragen.
3. Genetische Drift: Genetische Drift ist ein zufälliger Prozess, der durch die Verteilung von Allelen in einer Population stattfindet. Wenn eine Population kleiner wird oder wenn die Überlebens- und Fortpflanzungsraten zwischen den Individuen variieren, kann dies dazu führen, dass bestimmte Gene stärker oder weniger häufig werden. Dies kann dazu führen, dass die Population im Laufe der Zeit unterschiedliche Merkmale aufweist.
4. Adaptive Radiation: Adaptive Radiation ist ein Prozess, bei dem eine Art mehrere verschiedene Linien auslässt, die sich auf verschiedene Weise an ihre Umgebung anpassen. Dies geschieht normalerweise, wenn die Umwelt sich ändert und die Art neue Fähigkeiten benötigt, um überleben zu können. Die verschiedenen Linien können dann unterschiedliche Eigenschaften entwickeln, die ihnen helfen, in dieser neuen Umgebung zu überleben.
```
## Sachliche Abfrage / RAG
EM-Modelle bieten ein spezielles Format für RAG (also die Verwendung mit Informationen aus zusätzlichen externen Quellen wie Datenbanken oder Webseiten), das für die Minimierung von Halluzinationen und Falschinformationen optimiert ist. Wir verwenden dasselbe Format wie Airoboros dafür.
### Beispiel
Prompt
```
Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist! USER: BEGININPUT
BEGINCONTEXT
Url: https://www.jph.me
ENDCONTEXT
Das Wetter in Düsseldorf wird heute schön und sonnig!
ENDINPUT
BEGININSTRUCTION Was ist 1+1? ENDINSTRUCTION ASSISTANT:
```
Modellantwort:
```
Mit den gegebenen Informationen ist diese Frage nicht zu beantworten.
```
Hier würde das Modell normalerweise die (triviale) Antwort (2) geben, ist aber trainiert, keine Antwort zu geben, wenn es keine Quelle zwischen BEGININPUT und ENDINPUT gibt, die eine belegbare Antwort liefert.
Wenn wir stattdessen fragen:
```
BEGININSTRUCTION Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an! ENDINSTRUCTION ASSISTANT:
```
Bekommen wir die korrekte Antwort mit Quellenangabe
```
Das Wetter in Düsseldorf wird heute schön und sonnig! (Quelle: Url: https://www.jph.me)
```
# Evaluation
coming soon
# Datensatz
Der Datensatz ist eine Mischung aus Multi-Turn-Chats, RAG Instructions und synthetisch generierten Finetuning-Daten, die viele Themen und Anwendungen abdecken.
# Einschränkungen & Vorurteile
Dieses Modell kann sachlich falsche Ausgaben produzieren und sollte nur zu Forschungszwecken verwendet werden.
# Danksagungen:
Vielen Dank an [winglian/caseus](https://huggingface.co/winglian) für seine großartige Arbeit an Axolotl, die ich zum Training der EM-Modelle verwendet habe. Ich bin auch [Jon Durbin](https://huggingface.co/jondurbin) für seine Airoboros-Modelle und Code-Schnipsel dankbar, von denen ich viele Ideen nutzen konnte.
Das 70b-Modell wurde mit Unterstützung des [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/) trainiert.
# Kontakt
Wenn Sie an customized LLMs für geschäftliche Anwendungen interessiert sind, kontaktieren Sie mich bitte über [meine Website](https://www.jph.me). Ich freue mich auch immer über Anregungen und Feedback zu meinen Modellen.
*PS: Ich suche auch immer noch einen Co-Founder für unser Startup, das sich noch im Stealth-Modus befindet.*
# Haftungsausschluss:
Ich bin nicht verantwortlich für die Handlungen Dritter, die dieses Modell verwenden. Dieses Modell sollte nur für Forschungszwecke verwendet werden. Die ursprüngliche Llama2-Lizenz gilt und wird mit den Modell-Dateien verteilt.
|
jackoyoungblood/speecht5_finetuned_voxpopuli_nl2
|
jackoyoungblood
| 2023-10-03T16:58:57Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-10-03T14:58:35Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl2
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4599
## Model description
More information needed
## Intended uses & limitations
More information needed
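Going by the model name, this checkpoint appears to be SpeechT5 fine-tuned for text-to-speech on the Dutch VoxPopuli split. A rough inference sketch (not verified against this checkpoint) would follow the standard SpeechT5 recipe, which needs a 512-dimensional speaker x-vector, for example from the CMU Arctic embeddings dataset:
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("jackoyoungblood/speecht5_finetuned_voxpopuli_nl2")
model = SpeechT5ForTextToSpeech.from_pretrained("jackoyoungblood/speecht5_finetuned_voxpopuli_nl2")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim speaker x-vector works; here one example vector from the CMU Arctic embeddings dataset
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een test in het Nederlands.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```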
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5247 | 4.3 | 1000 | 0.4807 |
| 0.4971 | 8.61 | 2000 | 0.4652 |
| 0.4944 | 12.91 | 3000 | 0.4625 |
| 0.4913 | 17.21 | 4000 | 0.4599 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
NASA-AIML/MIKA_BERT_FMEA_NER
|
NASA-AIML
| 2023-10-03T16:37:10Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-12T23:52:41Z |
---
license: other
language:
- en
pipeline_tag: token-classification
tags:
- token-classification
- NER
library_name: transformers
widget:
- text: "While flying a fire, the UAS experienced an issue of unknown sorts and crashed to the ground. From the people watching the aircraft near the fire, they seem to think it was some sort of motor failure due to no more noise coming from the aircraft and it falling straight to the ground."
example_title: "Example 1"
- text: "During a pre-flight engine run-up, a battery hatch cover disengaged from the fuselage and hit one of the vertical takeoff and landing {VTOL} propellers. The motor failsafe activated and the motors shut down."
example_title: "Example 2"
- text: "UAS was climbing to 11,000 ft. msl on a reconnaissance mission when it experienced a rapid and uncommanded descent. The pilot took no action but monitored instruments until the aircraft regained a stable profile. "
example_title: "Example 3"
---
# Manager for Intelligent Knowledge Access (MIKA)
## Custom Named-Entity Recognition (NER) for Failure Modes and Effects Analysis (FMEA)
A bert-base-uncased model, first further pre-trained and then fine-tuned for custom NER to extract failure-relevant entities from incident and accident reports.
The model was trained on manually annotated NASA LLIS reports and evaluated on SAFECOM reports.
NER model training ran for 4 epochs using `BertForTokenClassification.from_pretrained` with `learning_rate=2e-5` and `weight_decay=0.01`.
The model was trained to identify the following long-tailed entities:
- CAU: failure cause
- MOD: failure mode
- EFF: failure effect
- CON: control process
- REC: recommendations
Performance:
|Entity | Precision |Recall | F-1 | Support|
|-------|-----------|-------|-----|--------|
|CAU | 0.31 | 0.19 | 0.23 | 1634 |
|CON | 0.49 | 0.34 | 0.40 | 3859 |
|EFF | 0.45 | 0.20 | 0.28 | 1959 |
|MOD | 0.19 | 0.52 | 0.28 | 594 |
|REC | 0.30 | 0.59 | 0.40 | 954 |
|Average | 0.41 | 0.32 | 0.33 | 9000 |
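A hedged usage sketch with the standard `transformers` token-classification pipeline (the aggregation strategy is an assumption, not something the original card specifies):
```python
from transformers import pipeline

# "simple" aggregation merges sub-word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="NASA-AIML/MIKA_BERT_FMEA_NER",
    aggregation_strategy="simple",
)

report = (
    "While flying a fire, the UAS experienced an issue of unknown sorts "
    "and crashed to the ground."
)
for entity in ner(report):
    print(entity["entity_group"], round(entity["score"], 3), entity["word"])
```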
More information on training data, evaluation, and intended use can be found in the original publication:
Citation:
S. R. Andrade and H. S. Walsh, "What Went Wrong: A Survey of Wildfire UAS Mishaps through Named Entity Recognition," 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC), Portsmouth, VA, USA, 2022, pp. 1-10, doi: 10.1109/DASC55683.2022.9925798.
https://ieeexplore.ieee.org/abstract/document/9925798
* * * * * * * * * * * * * *
Notices:
Copyright © 2023 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved.
Disclaimers
No Warranty: THE SUBJECT SOFTWARE IS PROVIDED "AS IS" WITHOUT ANY WARRANTY OF ANY KIND, EITHER EXPRESSED, IMPLIED, OR STATUTORY, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY THAT THE SUBJECT SOFTWARE WILL CONFORM TO SPECIFICATIONS, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR FREEDOM FROM INFRINGEMENT, ANY WARRANTY THAT THE SUBJECT SOFTWARE WILL BE ERROR FREE, OR ANY WARRANTY THAT DOCUMENTATION, IF PROVIDED, WILL CONFORM TO THE SUBJECT SOFTWARE. THIS AGREEMENT DOES NOT, IN ANY MANNER, CONSTITUTE AN ENDORSEMENT BY GOVERNMENT AGENCY OR ANY PRIOR RECIPIENT OF ANY RESULTS, RESULTING DESIGNS, HARDWARE, SOFTWARE PRODUCTS OR ANY OTHER APPLICATIONS RESULTING FROM USE OF THE SUBJECT SOFTWARE. FURTHER, GOVERNMENT AGENCY DISCLAIMS ALL WARRANTIES AND LIABILITIES REGARDING THIRD-PARTY SOFTWARE, IF PRESENT IN THE ORIGINAL SOFTWARE, AND DISTRIBUTES IT "AS IS."
Waiver and Indemnity: RECIPIENT AGREES TO WAIVE ANY AND ALL CLAIMS AGAINST THE UNITED STATES GOVERNMENT, ITS CONTRACTORS AND SUBCONTRACTORS, AS WELL AS ANY PRIOR RECIPIENT. IF RECIPIENT'S USE OF THE SUBJECT SOFTWARE RESULTS IN ANY LIABILITIES, DEMANDS, DAMAGES, EXPENSES OR LOSSES ARISING FROM SUCH USE, INCLUDING ANY DAMAGES FROM PRODUCTS BASED ON, OR RESULTING FROM, RECIPIENT'S USE OF THE SUBJECT SOFTWARE, RECIPIENT SHALL INDEMNIFY AND HOLD HARMLESS THE UNITED STATES GOVERNMENT, ITS CONTRACTORS AND SUBCONTRACTORS, AS WELL AS ANY PRIOR RECIPIENT, TO THE EXTENT PERMITTED BY LAW. RECIPIENT'S SOLE REMEDY FOR ANY SUCH MATTER SHALL BE THE IMMEDIATE, UNILATERAL TERMINATION OF THIS AGREEMENT.
* * * * * * * * * * * * * *
|
MorganDustyProd/lololoshka
|
MorganDustyProd
| 2023-10-03T16:36:42Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-10-03T16:35:26Z |
---
license: cc-by-nc-sa-4.0
---
|
vineetsharma/qlora-adapter-Llama-2-7b-hf-databricks-dolly-15k
|
vineetsharma
| 2023-10-03T16:36:19Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-10-03T11:24:15Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: qlora-adapter-Llama-2-7b-hf-databricks-dolly-15k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qlora-adapter-Llama-2-7b-hf-databricks-dolly-15k
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
Trained on an RTX A5000 (24 GB) GPU. Training took 3 hours 31 minutes on a dataset with 12,008 train samples and 1,501 validation samples.
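Since this repository holds only the LoRA adapter weights, inference presumably attaches the adapter to the Llama-2-7b base model with PEFT. A hedged sketch (access to the gated base model is required):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "vineetsharma/qlora-adapter-Llama-2-7b-hf-databricks-dolly-15k"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the QLoRA-trained adapter

inputs = tokenizer("Explain what a QLoRA adapter is.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```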
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1584 | 0.08 | 1000 | 1.1782 |
| 1.0667 | 0.17 | 2000 | 1.1710 |
| 1.0662 | 0.25 | 3000 | 1.1599 |
| 1.0517 | 0.33 | 4000 | 1.1569 |
| 1.0479 | 0.42 | 5000 | 1.1502 |
| 1.0516 | 0.5 | 6000 | 1.1441 |
| 1.0612 | 0.58 | 7000 | 1.1397 |
| 1.0235 | 0.67 | 8000 | 1.1361 |
| 1.0259 | 0.75 | 9000 | 1.1339 |
| 1.0485 | 0.83 | 10000 | 1.1320 |
| 1.0406 | 0.92 | 11000 | 1.1314 |
| 1.0393 | 1.0 | 12000 | 1.1313 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
cbellew09/taxi-v1
|
cbellew09
| 2023-10-03T16:34:01Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T16:33:59Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks;
# it downloads the pickled Q-table dictionary from the Hub and unpickles it.
model = load_from_hub(repo_id="cbellew09/taxi-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
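To roll out the greedy policy from the loaded Q-table, a sketch like the following can be used (it assumes the Gymnasium reset/step API and the `"qtable"` key that the Deep RL course notebooks store in the pickled dictionary):
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```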
|
purabp1249/swin-tiny-patch4-window7-224-finetuned-herbify_plants
|
purabp1249
| 2023-10-03T16:28:23Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T16:04:12Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-herbify_plants
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9868421052631579
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-herbify_plants
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0703
- Accuracy: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
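A hedged usage sketch with the `transformers` image-classification pipeline (the label set depends on the image folders used for training, which are not listed here):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="purabp1249/swin-tiny-patch4-window7-224-finetuned-herbify_plants",
)

# Replace with a path or URL to your own plant image
predictions = classifier("plant_leaf.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```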
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7849 | 0.99 | 21 | 0.4281 | 0.8553 |
| 0.2498 | 1.98 | 42 | 0.1295 | 0.9737 |
| 0.1387 | 2.96 | 63 | 0.0703 | 0.9868 |
| 0.1039 | 3.95 | 84 | 0.0741 | 0.9737 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
|