Dataset schema (one row per model card):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-29 12:28:52 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 534 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-29 12:25:02 |
| card | string | length 11 to 1.01M |
pligor/gr7.5b-dolly
|
pligor
| 2023-07-24T20:13:16Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xglm",
"text-generation",
"text-generation-inference",
"el",
"dataset:databricks/databricks-dolly-15k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2023-07-22T06:45:06Z |
---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- el
library_name: transformers
tags:
- text-generation-inference
pipeline_tag: text-generation
---
# Model Card for agrimi7.5B-dolly
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned (SFT) version of Facebook's xglm-7.5B, trained on a machine-translated Greek version of the databricks-dolly-15k dataset.
The purpose is to demonstrate that this pretrained model can adapt to instruction following using a relatively small dataset such as databricks-dolly-15k.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Andreas Loupasakis](https://github.com/alup)
- **Model type:** Causal Language Model
- **Language(s) (NLP):** Greek (el)
- **License:** Apache-2.0
- **Finetuned from model:** XGLM-7.5B
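A minimal usage sketch (not part of the original card), assuming the standard `transformers` causal-LM API; the Greek prompt and the prompt format are invented examples only:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "pligor/gr7.5b-dolly"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repository is tagged as 8-bit; loading may additionally require bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Ερώτηση: Τι είναι η τεχνητή νοημοσύνη;\nΑπάντηση:"  # example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```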
|
felixb85/ppo-Huggy
|
felixb85
| 2023-07-24T20:07:29Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-24T20:07:23Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: felixb85/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BBHF/llama2-qlora-finetunined-french
|
BBHF
| 2023-07-24T20:07:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T20:07:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
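For reference only (this snippet is not part of the original card), the values listed above roughly correspond to a `transformers` `BitsAndBytesConfig` like the following sketch:
```python
import torch
from transformers import BitsAndBytesConfig

# Values copied from the list above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```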
### Framework versions
- PEFT 0.5.0.dev0
|
Risco/llama2-qlora-finetunined-french
|
Risco
| 2023-07-24T19:55:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T19:55:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
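As an illustrative sketch only (not from the original card), a PEFT adapter such as this one is typically loaded on top of its base model. The base checkpoint below is a placeholder, since the card does not state which Llama-2 variant was used:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # assumption, not confirmed by the card
adapter_id = "Risco/llama2-qlora-finetunined-french"

# Load the base model, then attach the LoRA adapter weights from this repository.
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```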
|
nikitakapitan/xml-roberta-base-finetuned-panx-de-fr-it-en
|
nikitakapitan
| 2023-07-24T19:54:47Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-24T19:39:22Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xml-roberta-base-finetuned-panx-de-fr-it-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xml-roberta-base-finetuned-panx-de-fr-it-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1749
- F1: 0.8572
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3433 | 1.0 | 835 | 0.1938 | 0.8093 |
| 0.1544 | 2.0 | 1670 | 0.1675 | 0.8454 |
| 0.1125 | 3.0 | 2505 | 0.1749 | 0.8572 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
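As a quick illustration (not part of the auto-generated card), the checkpoint can be queried with the standard `transformers` token-classification pipeline; the German sentence below is an invented example:
```python
from transformers import pipeline

# aggregation_strategy groups word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="nikitakapitan/xml-roberta-base-finetuned-panx-de-fr-it-en",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```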
|
idlebg/easter-fusion
|
idlebg
| 2023-07-24T19:54:19Z | 171 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"art",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-18T00:18:57Z |
---
license: other
tags:
- stable-diffusion
- art
language:
- en
pipeline_tag: text-to-image
---

A fully revamped checkpoint based on the 512dim Lora
and chilloutmix_NiPrunedFp32Fix + deliberate_v2.
Training data: 512 DIM LORA
https://civitai.com/models/41893/lora-eggeaster-fusion
Here is how it looks using the lora on top of models:


More info on the Fusion project is soon to be announced ;)

🤟 🥃

lora weights: TEnc Weight 0.2 UNet Weight 1
Merged the 512dim lora into chillout and deliberate / --ratios 0.99 toward my fusion lora
C is Chillout / D is Deliberate
Merged C and D with:
Base_alpha=0.53
Weight_values=0,0.157576195987654,0.28491512345679,0.384765625,0.459876543209877,0.512996720679012,0.546875,0.564260223765432,0.567901234567901,0.560546875,0.544945987654321,0.523847415123457,0.5,0.476152584876543,0.455054012345679,0.439453125,0.432098765432099,0.435739776234568,0.453125,0.487003279320987,0.540123456790124,0.615234375,0.71508487654321,0.842423804012347,1
and
1,0.842423804012346,0.71508487654321,0.615234375,0.540123456790123,0.487003279320988,0.453125,0.435739776234568,0.432098765432099,0.439453125,0.455054012345679,0.476152584876543,0.5,0.523847415123457,0.544945987654321,0.560546875,0.567901234567901,0.564260223765432,0.546875,0.512996720679013,0.459876543209876,0.384765625,0.28491512345679,0.157576195987653,0
both results merged
Weight_values=
1,0.842423804012346,0.71508487654321,0.615234375,0.540123456790123,0.487003279320988,0.453125,0.435739776234568,0.432098765432099,0.439453125,0.455054012345679,0.476152584876543,0.5,0.523847415123457,0.544945987654321,0.560546875,0.567901234567901,0.564260223765432,0.546875,0.512996720679013,0.459876543209876,0.384765625,0.28491512345679,0.157576195987653,0
And voilà, here is the Egg_fusion checkpoint with the 512dim-trained lora

Enjoy your eggs 🤟 🥃
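A minimal loading sketch (not part of the original card), assuming the repository works with the standard `diffusers` `StableDiffusionPipeline` as its tags suggest; the prompt is only an example:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "idlebg/easter-fusion", torch_dtype=torch.float16
).to("cuda")
# Example prompt only; use your own.
image = pipe("a beautifully decorated easter egg, studio lighting").images[0]
image.save("easter_fusion_sample.png")
```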
|
musabgultekin/functionary-7b-v1
|
musabgultekin
| 2023-07-24T19:41:19Z | 10 | 13 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-24T14:31:40Z |
---
license: mit
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
---
See: https://github.com/musabgultekin/functionary/
|
diegoruffa3/llama2-qlora-finetunined-french
|
diegoruffa3
| 2023-07-24T19:39:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T19:39:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
IndianaUniversityDatasetsModels/test-BB3
|
IndianaUniversityDatasetsModels
| 2023-07-24T19:34:04Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"medical",
"en",
"dataset:IndianaUniversityDatasetsModels/Medical_reports_Splits",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-10T10:30:52Z |
---
license: apache-2.0
datasets:
- IndianaUniversityDatasetsModels/Medical_reports_Splits
language:
- en
metrics:
- rouge
library_name: transformers
tags:
- medical
---
## Inputs and Outputs
- **Expected Input:** `[Problems] + Text`
- **Target Output:** `[findings] + Text [impression] + Text`
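A minimal sketch (not part of the original card) of how the documented input format above might be fed through the `text2text-generation` pipeline; the report text is invented:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="IndianaUniversityDatasetsModels/test-BB3",
)
# Invented example following the "[Problems] + Text" input format.
report = "[Problems] chest pain, shortness of breath. PA and lateral chest radiographs were obtained."
print(generator(report, max_length=128)[0]["generated_text"])
```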
|
jakariamd/opp_115_do_not_track
|
jakariamd
| 2023-07-24T18:59:42Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T17:48:54Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opp_115_do_not_track
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opp_115_do_not_track
This model is a fine-tuned version of [mukund/privbert](https://huggingface.co/mukund/privbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 176 | 0.0005 | 1.0 |
| No log | 2.0 | 352 | 0.0010 | 1.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
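Illustrative only (not from the original card): the checkpoint can be queried with the standard text-classification pipeline. The example sentence is invented:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="jakariamd/opp_115_do_not_track")
# Label names come from the fine-tuned classification head.
print(clf("We do not respond to Do Not Track signals sent by your browser."))
```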
|
robinhad/open_llama_3b_uk_ggml
|
robinhad
| 2023-07-24T18:54:49Z | 0 | 0 | null |
[
"uk",
"dataset:robinhad/databricks-dolly-15k-uk",
"license:apache-2.0",
"region:us"
] | null | 2023-07-24T18:53:53Z |
---
license: apache-2.0
datasets:
- robinhad/databricks-dolly-15k-uk
language:
- uk
---
|
nikitakapitan/xlm-roberta-base-finetuned-panx-de
|
nikitakapitan
| 2023-07-24T18:49:26Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-16T05:29:21Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1604
- F1: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2865 | 1.0 | 715 | 0.1769 | 0.8240 |
| 0.1463 | 2.0 | 1430 | 0.1601 | 0.8420 |
| 0.0937 | 3.0 | 2145 | 0.1604 | 0.8595 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
jasheu/bert-base-uncased-finetuned-imdb
|
jasheu
| 2023-07-24T18:47:11Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-24T17:14:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: bert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.349 | 1.0 | 157 | 0.3184 |
| 0.3365 | 2.0 | 314 | 0.3168 |
| 0.3306 | 3.0 | 471 | 0.3134 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
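A minimal sketch (not from the original card): querying the checkpoint with the fill-mask pipeline. The example sentence is invented:
```python
from transformers import pipeline

# bert-base-uncased uses the [MASK] token.
fill = pipeline("fill-mask", model="jasheu/bert-base-uncased-finetuned-imdb")
for prediction in fill("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```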
|
bluemyu/back
|
bluemyu
| 2023-07-24T18:44:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-03T19:11:17Z |
---
license: creativeml-openrail-m
---
|
Tverous/sft-trl-no-claim
|
Tverous
| 2023-07-24T18:38:21Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"generated_from_trainer",
"dataset:anli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-24T00:36:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- anli
model-index:
- name: sft-trl-no-claim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-trl-no-claim
This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on the anli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1604 | 0.2 | 1000 | 1.8712 |
| 1.0077 | 0.4 | 2000 | 1.4700 |
| 0.8175 | 0.6 | 3000 | 1.0772 |
| 0.7621 | 0.8 | 4000 | 0.7959 |
| 0.3346 | 1.0 | 5000 | 0.7570 |
| 0.2903 | 1.2 | 6000 | 0.6882 |
| 0.2313 | 1.4 | 7000 | 0.6471 |
| 0.3038 | 1.6 | 8000 | 0.6203 |
| 0.2292 | 1.8 | 9000 | 0.6203 |
| 0.193 | 2.0 | 10000 | 0.6203 |
| 0.1198 | 2.2 | 11000 | 0.6203 |
| 0.2185 | 2.4 | 12000 | 0.6203 |
| 0.1038 | 2.6 | 13000 | 0.6203 |
| 0.1469 | 2.8 | 14000 | 0.6203 |
| 0.1857 | 3.0 | 15000 | 0.6203 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jakariamd/opp_115_user_choice_control
|
jakariamd
| 2023-07-24T18:35:50Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T18:29:39Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opp_115_user_choice_control
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opp_115_user_choice_control
This model is a fine-tuned version of [mukund/privbert](https://huggingface.co/mukund/privbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1533
- Accuracy: 0.9473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 152 | 0.2217 | 0.9366 |
| No log | 2.0 | 304 | 0.1533 | 0.9473 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
nlpulse/gpt-j-6b-english_quotes
|
nlpulse
| 2023-07-24T18:35:37Z | 25 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"en",
"dataset:Abirate/english_quotes",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T16:43:29Z |
---
license: apache-2.0
datasets:
- Abirate/english_quotes
language:
- en
library_name: transformers
---
# 4-bit Quantization - 4.92 GB GPU memory usage for inference
**See the same fine-tuning for Llama2-7B-Chat:** [https://huggingface.co/nlpulse/llama2-7b-chat-english_quotes](https://huggingface.co/nlpulse/llama2-7b-chat-english_quotes)
```
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.105.01 Driver Version: 515.105.01 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 1 NVIDIA GeForce ... Off | 00000000:04:00.0 Off | N/A |
| 37% 70C P2 163W / 170W | 4923MiB / 12288MiB | 91% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
## Fine-tuning
```
3 epochs, all dataset samples (split=train), 939 steps
1 x GPU NVidia RTX 3060 12GB - max. GPU memory: 7.44 GB
Duration: 1h45min
$ nvidia-smi && free -h
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.105.01 Driver Version: 515.105.01 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 1 NVIDIA GeForce ... Off | 00000000:04:00.0 Off | N/A |
|100% 89C P2 166W / 170W | 7439MiB / 12288MiB | 93% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
total used free shared buff/cache available
Mem: 77Gi 14Gi 23Gi 79Mi 39Gi 62Gi
Swap: 37Gi 0B 37Gi
```
## Inference
```
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_path = "nlpulse/gpt-j-6b-english_quotes"
# tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
# quantization config
quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
# model
model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=quant_config, device_map={"":0})
# inference
device = "cuda"
text_list = ["Ask not what your country", "Be the change that", "You only live once, but", "I'm selfish, impatient and"]
for text in text_list:
    inputs = tokenizer(text, return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=60)
    print('>> ', text, " => ", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Requirements
```
pip install -U bitsandbytes
pip install -U git+https://github.com/huggingface/transformers.git
pip install -U git+https://github.com/huggingface/peft.git
pip install -U accelerate
pip install -U datasets
pip install -U scipy
```
## Scripts
[https://github.com/nlpulse-io/sample_codes/tree/main/fine-tuning/peft_quantization_4bits/gptj-6b](https://github.com/nlpulse-io/sample_codes/tree/main/fine-tuning/peft_quantization_4bits/gptj-6b)
## References
[QLoRa: Fine-Tune a Large Language Model on Your GPU](https://towardsdatascience.com/qlora-fine-tune-a-large-language-model-on-your-gpu-27bed5a03e2b)
[Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes)
|
DunnBC22/vit-base-patch16-224-in21k_car_or_motorcycle
|
DunnBC22
| 2023-07-24T18:34:41Z | 181 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-07T02:05:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: vit-base-patch16-224-in21k_car_or_motorcycle
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.99375
language:
- en
pipeline_tag: image-classification
---
# vit-base-patch16-224-in21k_car_or_motorcycle
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0301
- Accuracy: 0.9938
- F1: 0.9939
- Recall: 0.9927
- Precision: 0.9951
## Model description
This is a binary classification model to distinguish between images of cars and images of motorcycles.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Binary%20Classification/Car%20or%20Motorcycle/Car_or_Motorcycle_ViT.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/utkarshsaxenadn/car-vs-bike-classification-dataset
_Sample Images From Dataset:_

## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6908 | 1.0 | 200 | 0.0372 | 0.99 | 0.9902 | 0.9902 | 0.9902 |
| 0.6908 | 2.0 | 400 | 0.0301 | 0.9938 | 0.9939 | 0.9927 | 0.9951 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.12.1
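An illustrative sketch only (not part of the original card): classifying a local image with the image-classification pipeline. The file path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="DunnBC22/vit-base-patch16-224-in21k_car_or_motorcycle",
)
# "example.jpg" stands in for a local photo of a car or motorcycle.
print(classifier("example.jpg"))
```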
|
DunnBC22/hateBERT-Hate_Offensive_or_Normal_Speech
|
DunnBC22
| 2023-07-24T18:30:54Z | 101 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-03T20:49:11Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hateBERT-Hate_Offensive_or_Normal_Speech
results: []
language:
- en
---
# hateBERT-Hate_Offensive_or_Normal_Speech
This model is a fine-tuned version of [GroNLP/hateBERT](https://huggingface.co/GroNLP/hateBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1655
- Accuracy: 0.9410
- F1
- Weighted: 0.9395
- Micro: 0.9410
- Macro: 0.9351
- Recall
- Weighted: 0.9410
- Micro: 0.9410
- Macro: 0.9273
- Precision
- Weighted: 0.9447
- Micro: 0.9410
- Macro: 0.9510
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Multiclass%20Classification/Transformer%20Comparison/Hate%20%26%20Offensive%20Speech%20-%20hateBERT.ipynb
### Associated Models
This project is part of a comparison that included the following models:
- https://huggingface.co/DunnBC22/bert-large-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/bert-base-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/distilbert-base-uncased-Hate_Offensive_or_Normal_Speech
- https://huggingface.co/DunnBC22/fBERT-Hate_Offensive_or_Normal_Speech
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
The main limitation is the quality of the data source.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/subhajournal/normal-hate-and-offensive-speeches
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.8958 | 1.0 | 39 | 0.6817 | 0.5508 | 0.4792 | 0.5508 | 0.4395 | 0.5508 | 0.5508 | 0.4853 | 0.7547 | 0.5508 | 0.7906 |
| 0.4625 | 2.0 | 78 | 0.2448 | 0.9246 | 0.9230 | 0.9246 | 0.9170 | 0.9246 | 0.9246 | 0.9103 | 0.9263 | 0.9246 | 0.9296 |
| 0.2071 | 3.0 | 117 | 0.1655 | 0.9410 | 0.9395 | 0.9410 | 0.9351 | 0.9410 | 0.9410 | 0.9273 | 0.9447 | 0.9410 | 0.9510 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.12.1
|
ntu-spml/distilhubert
|
ntu-spml
| 2023-07-24T18:30:45Z | 2,421 | 31 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"hubert",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2110.01900",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# DistilHuBERT
[DistilHuBERT by NTU Speech Processing & Machine Learning Lab](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and makes it 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
The original model can be found under https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
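A minimal feature-extraction sketch (not part of the original card), assuming the repository ships the standard feature extractor; the waveform below is random noise standing in for real 16 kHz speech:
```python
import torch
from transformers import AutoFeatureExtractor, HubertModel

extractor = AutoFeatureExtractor.from_pretrained("ntu-spml/distilhubert")
model = HubertModel.from_pretrained("ntu-spml/distilhubert")

# One second of placeholder 16 kHz audio standing in for real speech.
waveform = torch.randn(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state
print(features.shape)
```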
|
voidful/tts_hubert_cluster_bart_base
|
voidful
| 2023-07-24T18:30:35Z | 61 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"audio",
"automatic-speech-recognition",
"speech",
"asr",
"hubert",
"en",
"dataset:librispeech",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech
tags:
- audio
- automatic-speech-recognition
- speech
- asr
- hubert
license: apache-2.0
metrics:
- wer
- cer
---
# voidful/tts_hubert_cluster_bart_base
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("voidful/tts_hubert_cluster_bart_base")
model = AutoModelForSeq2SeqLM.from_pretrained("voidful/tts_hubert_cluster_bart_base")
```
Generate output:
```python
gen_output = model.generate(input_ids=tokenizer("going along slushy country roads and speaking to damp audience in drifty school rooms day after day for a fortnight he'll have to put in an appearance at some place of worship on sunday morning and he can come to ask immediately afterwards",return_tensors='pt').input_ids, max_length=1024)
print(tokenizer.decode(gen_output[0], skip_special_tokens=True))
```
## Result
`:vtok402::vtok329::vtok329::vtok75::vtok75::vtok75::vtok44::vtok150::vtok150::vtok222::vtok280::vtok280::vtok138::vtok409::vtok409::vtok409::vtok46::vtok441:`
|
S1X3L4/a2c-PandaReachDense-v2
|
S1X3L4
| 2023-07-24T18:29:45Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T18:26:40Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.52 +/- 0.46
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
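As an illustrative sketch only (not the author's code), one way to download and load this checkpoint with `huggingface_sb3` and `stable-baselines3`; the filename follows the usual SB3 Hub naming convention and is an assumption:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the zipped checkpoint from the Hub, then deserialize it with SB3.
checkpoint = load_from_hub(
    repo_id="S1X3L4/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```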
|
DunnBC22/led-base-16384-text_summarization_data
|
DunnBC22
| 2023-07-24T18:24:45Z | 102 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"summarization",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-03-08T05:58:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-base-16384-text_summarization_data
results: []
language:
- en
pipeline_tag: summarization
---
# led-base-16384-text_summarization_data
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9531
- Rouge1: 43.3689
- Rouge2: 19.9885
- RougeL: 39.9887
- RougeLsum: 40.0679
- Gen Len: 14.0392
## Model description
This is a text summarization model.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Text%20Summarization/Text-Summarized%20Data%20-%20Comparison/LED%20-%20Text%20Summarization%20-%204%20Epochs.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/cuitengfeui/textsummarization-data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.329 | 1.0 | 1197 | 0.9704 | 42.4111 | 19.8995 | 39.4717 | 39.5449 | 14.254 |
| 0.8367 | 2.0 | 2394 | 0.9425 | 43.1141 | 19.6089 | 39.7533 | 39.8298 | 14.1058 |
| 0.735 | 3.0 | 3591 | 0.9421 | 42.8101 | 19.8281 | 39.617 | 39.6751 | 13.7101 |
| 0.6737 | 4.0 | 4788 | 0.9531 | 43.3689 | 19.9885 | 39.9887 | 40.0679 | 14.0392 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.12.1
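A minimal sketch (not part of the original card): summarizing a passage with the summarization pipeline. The input text is an invented example:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="DunnBC22/led-base-16384-text_summarization_data",
)
# Invented example input.
article = (
    "The quarterly report covered revenue growth, hiring plans, and the "
    "roadmap for the next two product releases across all regions."
)
print(summarizer(article, max_length=48, min_length=8)[0]["summary_text"])
```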
|
mcamara/whisper-tiny-dv
|
mcamara
| 2023-07-24T18:20:33Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-24T15:14:39Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 57.438016528925615
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-dv
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4917
- Wer Ortho: 58.5441
- Wer: 57.4380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.0013 | 17.86 | 500 | 1.0739 | 56.0148 | 55.3719 |
| 0.0003 | 35.71 | 1000 | 1.1575 | 54.3492 | 53.6009 |
| 0.0002 | 53.57 | 1500 | 1.2226 | 55.3979 | 54.7226 |
| 0.0001 | 71.43 | 2000 | 1.2711 | 56.6934 | 55.4900 |
| 0.0001 | 89.29 | 2500 | 1.3089 | 56.1999 | 55.1948 |
| 0.0 | 107.14 | 3000 | 1.3487 | 55.4596 | 54.4864 |
| 0.0 | 125.0 | 3500 | 1.3865 | 56.4466 | 55.5490 |
| 0.0 | 142.86 | 4000 | 1.4259 | 58.9759 | 57.6741 |
| 0.0 | 160.71 | 4500 | 1.4563 | 58.2973 | 57.0838 |
| 0.0 | 178.57 | 5000 | 1.4917 | 58.5441 | 57.4380 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ManelSR/ppo-LunarLander-v2
|
ManelSR
| 2023-07-24T18:19:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T18:16:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 223.37 +/- 16.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DunnBC22/vit-base-patch16-224-in21k_vegetables_clf
|
DunnBC22
| 2023-07-24T18:17:32Z | 233 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-11T04:04:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: vit-base-patch16-224-in21k_vegetables_clf
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1
language:
- en
pipeline_tag: image-classification
---
# vit-base-patch16-224-in21k_vegetables_clf
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
It achieves the following results on the evaluation set:
- Loss: 0.0014
- Accuracy: 1.0
- F1
- Weighted: 1.0
- Micro: 1.0
- Macro: 1.0
- Recall
- Weighted: 1.0
- Micro: 1.0
- Macro: 1.0
- Precision
- Weighted: 1.0
- Micro: 1.0
- Macro: 1.0
## Model description
This is a multiclass image classification model of different vegetables.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Vegetable%20Image%20Classification/Vegetables_ViT.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/misrakahmed/vegetable-image-dataset
_Sample Images From Dataset:_

## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.2079 | 1.0 | 938 | 0.0193 | 0.996 | 0.9960 | 0.996 | 0.9960 | 0.996 | 0.996 | 0.9960 | 0.9960 | 0.996 | 0.9960 |
| 0.0154 | 2.0 | 1876 | 0.0068 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | 0.9987 |
| 0.0018 | 3.0 | 2814 | 0.0014 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.12.1
|
ailabturkiye/mazlumkiper
|
ailabturkiye
| 2023-07-24T18:17:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-24T18:13:57Z |
[](discord.gg/ailab)


# Mazlum Kiper (Vocalist) - RVC V2 300 Epoch
**This is the voice model of the vocalist Mazlum Kiper.**
**Trained as RVC V2 | 23-minute dataset | 300 epochs.**
_The dataset and training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is licensed under OpenRAIL.__
## Credits
**Please give credit when sharing a cover made with this model on any platform.**
- Discord: hydragee
- YouTube: CoverLai (https://www.youtube.com/@coverlai)

[](discord.gg/ailab)

|
jakariamd/opp_115_policy_change
|
jakariamd
| 2023-07-24T18:17:18Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T18:09:44Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opp_115_policy_change
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opp_115_policy_change
This model is a fine-tuned version of [mukund/privbert](https://huggingface.co/mukund/privbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0403
- Accuracy: 0.9920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 171 | 0.0609 | 0.9890 |
| No log | 2.0 | 342 | 0.0403 | 0.9920 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
afh77788bh/my_awesome_qa_model
|
afh77788bh
| 2023-07-24T18:13:08Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-24T02:37:55Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: afh77788bh/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# afh77788bh/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.5352
- Validation Loss: 2.3334
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5352 | 2.3334 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
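Illustrative only (not from the original card): running the TensorFlow checkpoint through the question-answering pipeline, assuming a tokenizer was pushed alongside the weights. The question and context are invented examples:
```python
from transformers import pipeline

# framework="tf" selects the TensorFlow weights saved by the Keras callback.
qa = pipeline(
    "question-answering",
    model="afh77788bh/my_awesome_qa_model",
    framework="tf",
)
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and largest city of France.",
)
print(result)
```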
|
NbAiLab/nb-whisper-base-beta
|
NbAiLab
| 2023-07-24T18:05:55Z | 84 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"asr",
"hf-asr-leaderboard",
"no",
"nb",
"nn",
"en",
"dataset:NbAiLab/ncc_speech",
"dataset:NbAiLab/NST",
"dataset:NbAiLab/NPSC",
"arxiv:2212.04356",
"arxiv:1910.09700",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-23T19:30:09Z |
---
license: cc-by-4.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
example_title: FLEURS sample 2
---
# NB-Whisper Base (beta)
This is a **_public beta_** of the Norwegian NB-Whisper Base model released by the National Library of Norway. NB-Whisper is a series of models for automatic speech recognition (ASR) and speech translation, building upon the foundation laid by [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). All models are trained on 20,000 hours of labeled data.
<center>
<figure>
<video controls>
<source src="https://huggingface.co/NbAiLab/nb-whisper-small-beta/resolve/main/king.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<figcaption><a href="https://www.royalcourt.no/tale.html?tid=137662&sek=28409&scope=27248" target="_blank">Speech given by His Majesty The King of Norway at the garden party hosted by Their Majesties The King and Queen at the Palace Park on 1 September 2016.</a> Transcribed using the Small model.</figcaption>
</figure>
</center>
## Model Details
NB-Whisper models will be available in five different sizes:
| Model Size | Parameters | Availability |
|------------|------------|--------------|
| tiny | 39M | [NB-Whisper Tiny (beta)](https://huggingface.co/NbAiLab/nb-whisper-tiny-beta) |
| base | 74M | [NB-Whisper Base (beta)](https://huggingface.co/NbAiLab/nb-whisper-base-beta) |
| small | 244M | [NB-Whisper Small (beta)](https://huggingface.co/NbAiLab/nb-whisper-small-beta) |
| medium | 769M | [NB-Whisper Medium (beta)](https://huggingface.co/NbAiLab/nb-whisper-medium-beta) |
| large | 1550M | [NB-Whisper Large (beta)](https://huggingface.co/NbAiLab/nb-whisper-large-beta) |
An official release of NB-Whisper models is planned for the Fall 2023.
Please refer to the OpenAI Whisper model card for more details about the backbone model.
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
- **Finetuned from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** http://ai.nb.no/demo/nb-whisper
## Uses
### Direct Use
This is a **_public beta_** release. The models published in this repository are intended for a generalist purpose and are available to third parties.
### Downstream Use
For Norwegian transcriptions we are confident that this public beta will give you State-of-the-Art results compared to currently available Norwegian ASR models of the same size. However, it is still known to show some hallucinations, as well as a tendency to drop part of the transcript from time to time. Please also note that the transcripts are typically not word by word. Spoken language and written language are often very different, and the model aims to "translate" spoken utterances into grammatically correct written sentences. We strongly believe that the best way to understand these models is to try them yourself.
A significant part of the training material comes from TV subtitles. Subtitles often shorten the content to make it easier to read. Typically, non-essential parts of the utterance can also be dropped. In some cases this is a desired ability, in other cases it is undesired. The final release of these models will provide a mechanism to control this behaviour.
## Bias, Risks, and Limitations
This is a public beta that is not intended for production. Production use without adequate assessment of risks and mitigation may be considered irresponsible or harmful. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (the National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline
asr = pipeline(
"automatic-speech-recognition",
"NbAiLab/nb-whisper-base-beta"
)
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'}
)
# {'text': ' Så mange anga kører seg i så viktig sak, så vi får du kører det tilbake med. Om kabaret gudam i at vi skal hjælge. Kør seg vi gjør en uda? Nei noe skal å abelistera sonvorne skrifer. Det er sak, så kjent det bare handling i samtatsen til bargører. Trudet første lask. På den å først så å køre og en gange samme, og så får vi gjør å vorte vorte vorte når vi kjent dit.'}
```
Timestamps can also be retrieved by passing in the right parameter.
```python
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'},
return_timestamps=True,
)
# {'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om hva valget dem gjør at vi skal gjøre. Hva skjer vi gjøre nå da? Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget, tror det første
# r. Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.',
# 'chunks': [{'timestamp': (0.0, 5.34),
# 'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om'},
# {'timestamp': (5.34, 8.64),
# 'text': ' hva valget dem gjør at vi skal gjøre.'},
# {'timestamp': (8.64, 10.64), 'text': ' Hva skjer vi gjøre nå da?'},
# {'timestamp': (10.64, 17.44),
# 'text': ' Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget,'},
# {'timestamp': (17.44, 19.44), 'text': ' tror det første år.'},
# {'timestamp': (19.44, 23.94),
# 'text': ' Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.'}]}
```
## Training Data
The training data comes from Språkbanken and the digital collection at the National Library of Norway. It includes:
- NST Norwegian ASR Database (16 kHz), and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament produced by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** TPUv4
- **Hours used:** 1,536
- **Cloud Provider:** Google Cloud
- **Compute Region:** `us-central1`
- **Carbon Emitted:** Total emissions are estimated to be 247.77 kgCO₂, of which 100 percent was directly offset by the cloud provider.
#### Software
The model is trained using JAX/Flax. The final model is converted to PyTorch, TensorFlow, whisper.cpp and ONNX. Please tell us if you would like future models to be converted to other formats.
## Citation & Contributors
The development of this model was part of the contributors' professional roles at the National Library of Norway, under the _NoSTram_ project led by _Per Egil Kummervold (PEK)_. The Jax code, dataset loaders, and training scripts were collectively designed by _Javier de la Rosa (JdlR)_, _Freddy Wetjen (FW)_, _Rolv-Arild Braaten (RAB)_, and _PEK_. Primary dataset curation was handled by _FW_, _RAB_, and _PEK_, while _JdlR_ and _PEK_ crafted the documentation. The project was completed under the umbrella of AiLab, directed by _Svein Arne Brygfjeld_.
All contributors played a part in shaping the optimal training strategy for the Norwegian ASR model based on the Whisper architecture.
_A paper detailing our process and findings is underway!_
## Acknowledgements
Thanks to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for supporting this project with extensive training resources. Thanks to Google Cloud for supporting us with credits for translating large parts of the corpus. A special thanks to [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi) for providing thorough technical advice on debugging and for helping us get this to train on Google TPUs. A special thanks to Per Erik Solberg at Språkbanken for the collaboration with regard to the Stortinget corpus.
## Contact
We are releasing this ASR Whisper model as a public beta to gather constructive feedback on its performance. Please do not hesitate to contact us with any experiences, insights, or suggestions that you may have. Your input is invaluable in helping us to improve the model and ensure that it effectively serves the needs of users. Whether you have technical concerns, usability suggestions, or ideas for future enhancements, we welcome your input. Thank you for participating in this critical stage of our model's development.
If you intend to incorporate this model into your research, we kindly request that you reach out to us. We can provide you with the most current status of our upcoming paper, which you can cite to acknowledge and provide context for the work done on this model.
Please use this email as the main contact point; it is read by the entire team: <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>
|
jvadlamudi2/convnext-tiny-224-jvadlamudi2
|
jvadlamudi2
| 2023-07-24T18:05:38Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnext-tiny-224",
"base_model:finetune:facebook/convnext-tiny-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-24T17:51:37Z |
---
license: apache-2.0
base_model: facebook/convnext-tiny-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-jvadlamudi2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-jvadlamudi2
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5780
- Accuracy: 0.7946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.5882 | 0.8036 |
| 0.6213 | 2.0 | 14 | 0.5821 | 0.7857 |
| 0.6123 | 3.0 | 21 | 0.5780 | 0.7946 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
NbAiLab/nb-whisper-large-beta
|
NbAiLab
| 2023-07-24T18:05:01Z | 42,970 | 8 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"asr",
"hf-asr-leaderboard",
"no",
"nb",
"nn",
"en",
"dataset:NbAiLab/ncc_speech",
"dataset:NbAiLab/NST",
"dataset:NbAiLab/NPSC",
"arxiv:2212.04356",
"arxiv:1910.09700",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-23T19:31:02Z |
---
license: cc-by-4.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
example_title: FLEURS sample 2
---
# NB-Whisper Large (beta)
This is a **_public beta_** of the Norwegian NB-Whisper Large model released by the National Library of Norway. NB-Whisper is a series of models for automatic speech recognition (ASR) and speech translation, building upon the foundation laid by [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). All models are trained on 20,000 hours of labeled data.
<center>
<figure>
<video controls>
<source src="https://huggingface.co/NbAiLab/nb-whisper-small-beta/resolve/main/king.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<figcaption><a href="https://www.royalcourt.no/tale.html?tid=137662&sek=28409&scope=27248" target="_blank">Speech given by His Majesty The King of Norway at the garden party hosted by Their Majesties The King and Queen at the Palace Park on 1 September 2016.</a>Transcribed using the Small model.</figcaption>
</figure>
</center>
## Model Details
NB-Whisper models will be available in five different sizes:
| Model Size | Parameters | Availability |
|------------|------------|--------------|
| tiny | 39M | [NB-Whisper Tiny (beta)](https://huggingface.co/NbAiLab/nb-whisper-tiny-beta) |
| base | 74M | [NB-Whisper Base (beta)](https://huggingface.co/NbAiLab/nb-whisper-base-beta) |
| small | 244M | [NB-Whisper Small (beta)](https://huggingface.co/NbAiLab/nb-whisper-small-beta) |
| medium | 769M | [NB-Whisper Medium (beta)](https://huggingface.co/NbAiLab/nb-whisper-medium-beta) |
| large | 1550M | [NB-Whisper Large (beta)](https://huggingface.co/NbAiLab/nb-whisper-large-beta) |
An official release of NB-Whisper models is planned for the fall of 2023.
Please refer to the OpenAI Whisper model card for more details about the backbone model.
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
- **Finetuned from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** http://ai.nb.no/demo/nb-whisper
## Uses
### Direct Use
This is a **_public beta_** release. The models published in this repository are intended for a generalist purpose and are available to third parties.
### Downstream Use
For Norwegian transcription, we are confident that this public beta will give you state-of-the-art results compared to currently available Norwegian ASR models of the same size. However, the model is still known to produce occasional hallucinations, and it has a tendency to drop parts of the transcript from time to time. Please also note that the transcripts are typically not word-for-word. Spoken language and written language are often very different, and the model aims to "translate" spoken utterances into grammatically correct written sentences. We strongly believe that the best way to understand these models is to try them yourself.
A significant part of the training material comes from TV subtitles. Subtitles often shorten the content to make it easier to read, and non-essential parts of the utterance are typically dropped as well. In some cases this is a desired ability, in other cases it is undesired. The final release of these models will provide a mechanism to control this behaviour.
## Bias, Risks, and Limitations
This is a public beta that is not intended for production. Production use without adequate assessment of risks and mitigation may be considered irresponsible or harmful. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline
asr = pipeline(
"automatic-speech-recognition",
"NbAiLab/nb-whisper-large-beta"
)
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'}
)
# {'text': ' Så mange anga kører seg i så viktig sak, så vi får du kører det tilbake med. Om kabaret gudam i at vi skal hjælge. Kør seg vi gjør en uda? Nei noe skal å abelistera sonvorne skrifer. Det er sak, så kjent det bare handling i samtatsen til bargører. Trudet første lask. På den å først så å køre og en gange samme, og så får vi gjør å vorte vorte vorte når vi kjent dit.'}
```
Timestamps can also be retrieved by passing in the right parameter.
```python
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'},
return_timestamps=True,
)
# {'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om hva valget dem gjør at vi skal gjøre. Hva skjer vi gjøre nå da? Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget, tror det første
# r. Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.',
# 'chunks': [{'timestamp': (0.0, 5.34),
# 'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om'},
# {'timestamp': (5.34, 8.64),
# 'text': ' hva valget dem gjør at vi skal gjøre.'},
# {'timestamp': (8.64, 10.64), 'text': ' Hva skjer vi gjøre nå da?'},
# {'timestamp': (10.64, 17.44),
# 'text': ' Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget,'},
# {'timestamp': (17.44, 19.44), 'text': ' tror det første år.'},
# {'timestamp': (19.44, 23.94),
# 'text': ' Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.'}]}
```
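For recordings longer than Whisper's 30-second window, the audio can be transcribed in chunks. The sketch below is a minimal example that relies on the generic `chunk_length_s` and `batch_size` options of the `transformers` pipeline; these values are illustrative assumptions, not settings recommended by this card.
```python
from transformers import pipeline

# Chunked long-form transcription: chunk_length_s and batch_size are generic
# pipeline options, not values taken from this model card.
asr = pipeline(
    "automatic-speech-recognition",
    "NbAiLab/nb-whisper-large-beta",
    chunk_length_s=30,
)
asr(
    "audio.mp3",
    batch_size=8,
    generate_kwargs={'task': 'transcribe', 'language': 'no'},
)
```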
## Training Data
Training data comes from Språkbanken and the digital collection at the National Library of Norway. It includes:
- NST Norwegian ASR Database (16 kHz), and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament produced by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** TPUv4
- **Hours used:** 1,536
- **Cloud Provider:** Google Cloud
- **Compute Region:** `us-central1`
- **Carbon Emitted:** Total emissions are estimated to be 247.77 kgCO₂, of which 100 percent was directly offset by the cloud provider.
#### Software
The model is trained using Jax/Flax. The final model is converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. Please tell us if you would like future models to be converted to other formats.
## Citation & Contributors
The development of this model was part of the contributors' professional roles at the National Library of Norway, under the _NoSTram_ project led by _Per Egil Kummervold (PEK)_. The Jax code, dataset loaders, and training scripts were collectively designed by _Javier de la Rosa (JdlR)_, _Freddy Wetjen (FW)_, _Rolv-Arild Braaten (RAB)_, and _PEK_. Primary dataset curation was handled by _FW_, _RAB_, and _PEK_, while _JdlR_ and _PEK_ crafted the documentation. The project was completed under the umbrella of AiLab, directed by _Svein Arne Brygfjeld_.
All contributors played a part in shaping the optimal training strategy for the Norwegian ASR model based on the Whisper architecture.
_A paper detailing our process and findings is underway!_
## Acknowledgements
Thanks to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for supporting this project with extensive training resources. Thanks to Google Cloud for supporting us with credits for translating large parts of the corpus. A special thanks to [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi) for providing thorough technical advice on debugging and on getting this to train on Google TPUs. A special thanks to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
We are releasing this ASR Whisper model as a public beta to gather constructive feedback on its performance. Please do not hesitate to contact us with any experiences, insights, or suggestions that you may have. Your input is invaluable in helping us to improve the model and ensure that it effectively serves the needs of users. Whether you have technical concerns, usability suggestions, or ideas for future enhancements, we welcome your input. Thank you for participating in this critical stage of our model's development.
If you intend to incorporate this model into your research, we kindly request that you reach out to us. We can provide you with the most current status of our upcoming paper, which you can cite to acknowledge and provide context for the work done on this model.
Please use this email as the main contact point; it is read by the entire team: <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>
|
NbAiLab/nb-whisper-medium-beta
|
NbAiLab
| 2023-07-24T18:04:37Z | 110 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"asr",
"hf-asr-leaderboard",
"no",
"nb",
"nn",
"en",
"dataset:NbAiLab/ncc_speech",
"dataset:NbAiLab/NST",
"dataset:NbAiLab/NPSC",
"arxiv:2212.04356",
"arxiv:1910.09700",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-23T19:30:37Z |
---
license: cc-by-4.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
example_title: FLEURS sample 2
---
# NB-Whisper Medium (beta)
This is a **_public beta_** of the Norwegian NB-Whisper Medium model released by the National Library of Norway. NB-Whisper is a series of models for automatic speech recognition (ASR) and speech translation, building upon the foundation laid by [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). All models are trained on 20,000 hours of labeled data.
<center>
<figure>
<video controls>
<source src="https://huggingface.co/NbAiLab/nb-whisper-small-beta/resolve/main/king.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<figcaption><a href="https://www.royalcourt.no/tale.html?tid=137662&sek=28409&scope=27248" target="_blank">Speech given by His Majesty The King of Norway at the garden party hosted by Their Majesties The King and Queen at the Palace Park on 1 September 2016.</a>Transcribed using the Small model.</figcaption>
</figure>
</center>
## Model Details
NB-Whisper models will be available in five different sizes:
| Model Size | Parameters | Availability |
|------------|------------|--------------|
| tiny | 39M | [NB-Whisper Tiny (beta)](https://huggingface.co/NbAiLab/nb-whisper-tiny-beta) |
| base | 74M | [NB-Whisper Base (beta)](https://huggingface.co/NbAiLab/nb-whisper-base-beta) |
| small | 244M | [NB-Whisper Small (beta)](https://huggingface.co/NbAiLab/nb-whisper-small-beta) |
| medium | 769M | [NB-Whisper Medium (beta)](https://huggingface.co/NbAiLab/nb-whisper-medium-beta) |
| large | 1550M | [NB-Whisper Large (beta)](https://huggingface.co/NbAiLab/nb-whisper-large-beta) |
An official release of NB-Whisper models is planned for the fall of 2023.
Please refer to the OpenAI Whisper model card for more details about the backbone model.
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
- **Finetuned from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** http://ai.nb.no/demo/nb-whisper
## Uses
### Direct Use
This is a **_public beta_** release. The models published in this repository are intended for a generalist purpose and are available to third parties.
### Downstream Use
For Norwegian transcription, we are confident that this public beta will give you state-of-the-art results compared to currently available Norwegian ASR models of the same size. However, the model is still known to produce occasional hallucinations, and it has a tendency to drop parts of the transcript from time to time. Please also note that the transcripts are typically not word-for-word. Spoken language and written language are often very different, and the model aims to "translate" spoken utterances into grammatically correct written sentences. We strongly believe that the best way to understand these models is to try them yourself.
A significant part of the training material comes from TV subtitles. Subtitles often shorten the content to make it easier to read, and non-essential parts of the utterance are typically dropped as well. In some cases this is a desired ability, in other cases it is undesired. The final release of these models will provide a mechanism to control this behaviour.
## Bias, Risks, and Limitations
This is a public beta that is not intended for production. Production use without adequate assessment of risks and mitigation may be considered irresponsible or harmful. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline
asr = pipeline(
"automatic-speech-recognition",
"NbAiLab/nb-whisper-medium-beta"
)
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'}
)
# {'text': ' Så mange anga kører seg i så viktig sak, så vi får du kører det tilbake med. Om kabaret gudam i at vi skal hjælge. Kør seg vi gjør en uda? Nei noe skal å abelistera sonvorne skrifer. Det er sak, så kjent det bare handling i samtatsen til bargører. Trudet første lask. På den å først så å køre og en gange samme, og så får vi gjør å vorte vorte vorte når vi kjent dit.'}
```
Timestamps can also be retrieved by passing in the right parameter.
```python
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'},
return_timestamps=True,
)
# {'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om hva valget dem gjør at vi skal gjøre. Hva skjer vi gjøre nå da? Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget, tror det første
# r. Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.',
# 'chunks': [{'timestamp': (0.0, 5.34),
# 'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om'},
# {'timestamp': (5.34, 8.64),
# 'text': ' hva valget dem gjør at vi skal gjøre.'},
# {'timestamp': (8.64, 10.64), 'text': ' Hva skjer vi gjøre nå da?'},
# {'timestamp': (10.64, 17.44),
# 'text': ' Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget,'},
# {'timestamp': (17.44, 19.44), 'text': ' tror det første år.'},
# {'timestamp': (19.44, 23.94),
# 'text': ' Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.'}]}
```
## Training Data
Training data comes from Språkbanken and the digital collection at the National Library of Norway. It includes:
- NST Norwegian ASR Database (16 kHz), and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament produced by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** TPUv4
- **Hours used:** 1,536
- **Cloud Provider:** Google Cloud
- **Compute Region:** `us-central1`
- **Carbon Emitted:** Total emissions are estimated to be 247.77 kgCO₂, of which 100 percent was directly offset by the cloud provider.
#### Software
The model is trained using Jax/Flax. The final model is converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. Please tell us if you would like future models to be converted to other formats.
## Citation & Contributors
The development of this model was part of the contributors' professional roles at the National Library of Norway, under the _NoSTram_ project led by _Per Egil Kummervold (PEK)_. The Jax code, dataset loaders, and training scripts were collectively designed by _Javier de la Rosa (JdlR)_, _Freddy Wetjen (FW)_, _Rolv-Arild Braaten (RAB)_, and _PEK_. Primary dataset curation was handled by _FW_, _RAB_, and _PEK_, while _JdlR_ and _PEK_ crafted the documentation. The project was completed under the umbrella of AiLab, directed by _Svein Arne Brygfjeld_.
All contributors played a part in shaping the optimal training strategy for the Norwegian ASR model based on the Whisper architecture.
_A paper detailing our process and findings is underway!_
## Acknowledgements
Thanks to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for supporting this project with extensive training resources. Thanks to Google Cloud for supporting us with credits for translating large parts of the corpus. A special thanks to [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi) for providing thorough technical advice on debugging and on getting this to train on Google TPUs. A special thanks to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
We are releasing this ASR Whisper model as a public beta to gather constructive feedback on its performance. Please do not hesitate to contact us with any experiences, insights, or suggestions that you may have. Your input is invaluable in helping us to improve the model and ensure that it effectively serves the needs of users. Whether you have technical concerns, usability suggestions, or ideas for future enhancements, we welcome your input. Thank you for participating in this critical stage of our model's development.
If you intend to incorporate this model into your research, we kindly request that you reach out to us. We can provide you with the most current status of our upcoming paper, which you can cite to acknowledge and provide context for the work done on this model.
Please use this email as the main contact point; it is read by the entire team: <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>
|
jvadlamudi2/swin-tiny-patch4-window7-224-jvadlamudi2
|
jvadlamudi2
| 2023-07-24T17:45:19Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-16T02:44:27Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-jvadlamudi2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-jvadlamudi2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4467
- Accuracy: 0.8304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.4580 | 0.8214 |
| 0.4488 | 2.0 | 14 | 0.4733 | 0.8304 |
| 0.4924 | 3.0 | 21 | 0.4467 | 0.8304 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
jakariamd/opp_115_data_security
|
jakariamd
| 2023-07-24T17:44:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T17:38:42Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opp_115_data_security
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opp_115_data_security
This model is a fine-tuned version of [mukund/privbert](https://huggingface.co/mukund/privbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0891
- Accuracy: 0.9733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 164 | 0.1156 | 0.9687 |
| No log | 2.0 | 328 | 0.0891 | 0.9733 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
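No usage snippet is provided above. The following is a minimal sketch, assuming the checkpoint works with the standard text-classification pipeline; the example sentence is illustrative only.
```python
from transformers import pipeline

# Load the fine-tuned classifier and score an example privacy-policy sentence.
classifier = pipeline("text-classification", model="jakariamd/opp_115_data_security")
print(classifier("We encrypt all personal information during transmission using SSL."))
```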
|
augtoma/qCammel-13
|
augtoma
| 2023-07-24T17:39:06Z | 1,633 | 11 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"qCammel-13",
"en",
"arxiv:2305.12031",
"arxiv:2305.14314",
"arxiv:2302.13971",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-24T17:02:29Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- pytorch
- llama
- llama-2
- qCammel-13
library_name: transformers
---
# qCammel-13
qCammel-13 is a fine-tuned version of the Llama-2 13B model, trained on a distilled dataset of 15,000 instructions using QLoRA. This model is optimized for academic medical knowledge and instruction-following capabilities.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept the license before downloading this model.*
The fine-tuning process applied to qCammel-13 uses a distilled dataset of 15,000 instructions and is carried out with QLoRA.
**Variations** The original Llama 2 has parameter sizes of 7B, 13B, and 70B. This is the fine-tuned version of the 13B model.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** qCammel-13 is based on the Llama 2 architecture, an auto-regressive language model that uses a decoder only transformer architecture.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved
**Research Papers**
- [Clinical Camel: An Open-Source Expert-Level Medical Language Model with Dialogue-Based Knowledge Encoding](https://arxiv.org/abs/2305.12031)
- [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
- [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
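The card does not include an inference example. Below is a minimal sketch, assuming the weights load through the standard `transformers` causal-LM classes; the prompt and generation settings are purely illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "augtoma/qCammel-13"  # repo id taken from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

prompt = "Summarize the first-line management of community-acquired pneumonia."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```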
|
MUmairAB/mt5-small-finetuned-en-and-es
|
MUmairAB
| 2023-07-24T17:35:00Z | 6 | 1 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-24T15:55:26Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_keras_callback
model-index:
- name: MUmairAB/mt5-small-finetuned-en-and-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MUmairAB/mt5-small-finetuned-en-and-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8111
- Validation Loss: 3.3450
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 12090, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.8080 | 4.2830 | 0 |
| 5.9101 | 3.8003 | 1 |
| 5.1453 | 3.6062 | 2 |
| 4.7005 | 3.5052 | 3 |
| 4.4095 | 3.4904 | 4 |
| 4.2204 | 3.3996 | 5 |
| 4.0501 | 3.3842 | 6 |
| 3.9260 | 3.3963 | 7 |
| 3.8527 | 3.3267 | 8 |
| 3.8111 | 3.3450 | 9 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.0
- Tokenizers 0.13.3
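The card does not state the downstream task. As a minimal sketch, the checkpoint can be queried through the generic text2text-generation pipeline; `framework="tf"` is an assumption based on the repository shipping TensorFlow weights, and the input sentence is illustrative only.
```python
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="MUmairAB/mt5-small-finetuned-en-and-es",
    framework="tf",  # assumption: the repo provides TensorFlow weights
)
print(pipe("I absolutely loved this book; the plot was gripping from start to finish."))
```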
|
Za88yes/Irma
|
Za88yes
| 2023-07-24T17:30:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-24T17:27:39Z |
---
license: creativeml-openrail-m
---
|
wuru330/378A1_results
|
wuru330
| 2023-07-24T17:25:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-24T16:39:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 378A1_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 378A1_results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4492
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2401 | 1.0 | 37 | 1.0427 | 0.6582 |
| 0.669 | 2.0 | 74 | 0.5486 | 0.8418 |
| 0.4662 | 3.0 | 111 | 0.4012 | 0.8690 |
| 0.3211 | 4.0 | 148 | 0.5338 | 0.7942 |
| 0.2136 | 5.0 | 185 | 0.3189 | 0.8861 |
| 0.1626 | 6.0 | 222 | 0.4406 | 0.8435 |
| 0.1042 | 7.0 | 259 | 0.3812 | 0.8741 |
| 0.0688 | 8.0 | 296 | 0.3501 | 0.8946 |
| 0.0425 | 9.0 | 333 | 0.3845 | 0.8912 |
| 0.0586 | 10.0 | 370 | 0.3640 | 0.8980 |
| 0.0276 | 11.0 | 407 | 0.3708 | 0.9031 |
| 0.0342 | 12.0 | 444 | 0.3862 | 0.9082 |
| 0.0251 | 13.0 | 481 | 0.5206 | 0.8776 |
| 0.0209 | 14.0 | 518 | 0.4078 | 0.8929 |
| 0.0173 | 15.0 | 555 | 0.4168 | 0.8895 |
| 0.0159 | 16.0 | 592 | 0.4108 | 0.8997 |
| 0.0151 | 17.0 | 629 | 0.4176 | 0.9014 |
| 0.014 | 18.0 | 666 | 0.4228 | 0.9014 |
| 0.0131 | 19.0 | 703 | 0.4266 | 0.9014 |
| 0.0125 | 20.0 | 740 | 0.4301 | 0.9014 |
| 0.012 | 21.0 | 777 | 0.4339 | 0.9014 |
| 0.0115 | 22.0 | 814 | 0.4372 | 0.9014 |
| 0.0111 | 23.0 | 851 | 0.4401 | 0.9014 |
| 0.0107 | 24.0 | 888 | 0.4424 | 0.9014 |
| 0.0101 | 25.0 | 925 | 0.4444 | 0.9014 |
| 0.01 | 26.0 | 962 | 0.4461 | 0.9014 |
| 0.01 | 27.0 | 999 | 0.4475 | 0.9014 |
| 0.0099 | 28.0 | 1036 | 0.4485 | 0.9014 |
| 0.0097 | 29.0 | 1073 | 0.4490 | 0.9014 |
| 0.0097 | 30.0 | 1110 | 0.4492 | 0.9014 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
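No inference example is given above. A minimal sketch, assuming the fine-tuned checkpoint works with the standard image-classification pipeline; the image path is a placeholder.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="wuru330/378A1_results")
print(classifier("example.jpg"))  # local path or URL to an image
```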
|
jakariamd/opp_115_first_user_access
|
jakariamd
| 2023-07-24T17:25:10Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T17:18:47Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opp_115_first_user_access
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opp_115_first_user_access
This model is a fine-tuned version of [mukund/privbert](https://huggingface.co/mukund/privbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0559
- Accuracy: 0.9872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 167 | 0.0697 | 0.9857 |
| No log | 2.0 | 334 | 0.0559 | 0.9872 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
S1X3L4/a2c-AntBulletEnv-v0
|
S1X3L4
| 2023-07-24T17:16:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T17:15:04Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 966.84 +/- 102.30
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
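Until the TODO above is filled in, here is a minimal loading sketch; the zip filename is an assumption based on the common `<algo>-<env>.zip` naming and may differ in the actual repository.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename; check the repository files for the actual checkpoint name.
checkpoint = load_from_hub("S1X3L4/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```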
|
ALM-AHME/beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled
|
ALM-AHME
| 2023-07-24T17:13:49Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-large-patch16-224",
"base_model:finetune:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-23T22:40:44Z |
---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0146
- Accuracy: 0.9958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.5847 | 1.0 | 199 | 0.8030 | 0.4640 |
| 0.2856 | 2.0 | 398 | 0.9354 | 0.1753 |
| 0.156 | 3.0 | 597 | 0.9552 | 0.1179 |
| 0.1049 | 4.0 | 796 | 0.9585 | 0.1043 |
| 0.1399 | 5.0 | 995 | 0.9760 | 0.0673 |
| 0.0423 | 6.0 | 1194 | 0.9802 | 0.0455 |
| 0.078 | 7.0 | 1393 | 0.9802 | 0.0554 |
| 0.1769 | 8.0 | 1592 | 0.9764 | 0.0556 |
| 0.0568 | 9.0 | 1791 | 0.9807 | 0.0569 |
| 0.0728 | 10.0 | 1990 | 0.9915 | 0.0234 |
| 0.0229 | 11.0 | 2189 | 0.9910 | 0.0240 |
| 0.0561 | 12.0 | 2388 | 0.9901 | 0.0352 |
| 0.014 | 13.0 | 2587 | 0.9797 | 0.0749 |
| 0.096 | 14.0 | 2786 | 0.9934 | 0.0268 |
| 0.0005 | 15.0 | 2985 | 0.9958 | 0.0146 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jakariamd/opp_115_third_party_sharing
|
jakariamd
| 2023-07-24T16:58:27Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T16:51:24Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opp_115_third_party_sharing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opp_115_third_party_sharing
This model is a fine-tuned version of [mukund/privbert](https://huggingface.co/mukund/privbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1756
- Accuracy: 0.9627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 141 | 0.2359 | 0.9574 |
| No log | 2.0 | 282 | 0.1756 | 0.9627 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
ericNguyen0132/roberta-large-Dep-v1
|
ericNguyen0132
| 2023-07-24T16:56:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:rafalposwiata/deproberta-large-depression",
"base_model:finetune:rafalposwiata/deproberta-large-depression",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T03:00:14Z |
---
base_model: rafalposwiata/deproberta-large-depression
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-Dep-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-Dep-v1
This model is a fine-tuned version of [rafalposwiata/deproberta-large-depression](https://huggingface.co/rafalposwiata/deproberta-large-depression) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5267
- Accuracy: 0.855
- F1: 0.9138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.3785 | 0.855 | 0.9150 |
| 0.3765 | 2.0 | 938 | 0.3664 | 0.8767 | 0.9287 |
| 0.3178 | 3.0 | 1407 | 0.5267 | 0.855 | 0.9138 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HilbertS/a2c-PandaReachDense-v2
|
HilbertS
| 2023-07-24T16:50:39Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T16:37:50Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.19 +/- 1.68
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
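As a stopgap for the TODO above, a minimal loading sketch; the checkpoint filename is assumed, not read from the repository.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub("HilbertS/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")  # assumed filename
model = A2C.load(checkpoint)
```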
|
ALM-AHME/convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled
|
ALM-AHME
| 2023-07-24T16:31:45Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-large-1k-224",
"base_model:finetune:facebook/convnextv2-large-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-24T10:16:13Z |
---
license: apache-2.0
base_model: facebook/convnextv2-large-1k-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-large-1k-224-finetuned-BreastCancer-Classification-BreakHis-AH-60-20-20-Shuffled
This model is a fine-tuned version of [facebook/convnextv2-large-1k-224](https://huggingface.co/facebook/convnextv2-large-1k-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0398
- Accuracy: 0.9882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.5059 | 1.0 | 199 | 0.9001 | 0.4826 |
| 0.2533 | 2.0 | 398 | 0.9515 | 0.2124 |
| 0.2358 | 3.0 | 597 | 0.9538 | 0.1543 |
| 0.2584 | 4.0 | 796 | 0.9642 | 0.1136 |
| 0.1085 | 5.0 | 995 | 0.9746 | 0.0891 |
| 0.1007 | 6.0 | 1194 | 0.9769 | 0.0725 |
| 0.1463 | 7.0 | 1393 | 0.9840 | 0.0541 |
| 0.3564 | 8.0 | 1592 | 0.9802 | 0.0880 |
| 0.0957 | 9.0 | 1791 | 0.9656 | 0.1375 |
| 0.1481 | 10.0 | 1990 | 0.9873 | 0.0511 |
| 0.1536 | 11.0 | 2189 | 0.9713 | 0.0827 |
| 0.0458 | 12.0 | 2388 | 0.9882 | 0.0398 |
| 0.4956 | 13.0 | 2587 | 0.8643 | 0.3474 |
| 0.0801 | 14.0 | 2786 | 0.9797 | 0.0850 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
EllaHong/km5.8_qlora_4b_exp7
|
EllaHong
| 2023-07-24T16:30:24Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T16:30:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
zacdennis/ppo-LunarLander-v2
|
zacdennis
| 2023-07-24T16:30:10Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T16:29:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.27 +/- 56.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
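Pending the TODO above, a minimal loading sketch; the zip filename follows the usual naming convention and is an assumption.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("zacdennis/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # assumed filename
model = PPO.load(checkpoint)
```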
|
NasimB/cl-log-rarity-220k
|
NasimB
| 2023-07-24T16:00:41Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-24T13:26:50Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cl-log-rarity-220k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cl-log-rarity-220k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.7188 | 0.07 | 500 | 5.5918 |
| 4.5196 | 0.13 | 1000 | 5.2478 |
| 4.2676 | 0.2 | 1500 | 5.0782 |
| 4.1109 | 0.26 | 2000 | 4.9843 |
| 3.9983 | 0.33 | 2500 | 4.9048 |
| 3.8933 | 0.4 | 3000 | 4.8693 |
| 3.8127 | 0.46 | 3500 | 4.8279 |
| 3.7277 | 0.53 | 4000 | 4.8011 |
| 3.6491 | 0.59 | 4500 | 4.7678 |
| 3.5775 | 0.66 | 5000 | 4.7367 |
| 3.5032 | 0.73 | 5500 | 4.7246 |
| 3.4444 | 0.79 | 6000 | 4.7037 |
| 3.39 | 0.86 | 6500 | 4.6959 |
| 3.3637 | 0.92 | 7000 | 4.6939 |
| 3.3558 | 0.99 | 7500 | 4.6909 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
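The card includes no generation example. A minimal sketch, assuming the checkpoint works with the standard text-generation pipeline; the prompt and length are illustrative.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/cl-log-rarity-220k")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```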
|
MattStammers/flyingHuggy
|
MattStammers
| 2023-07-24T15:59:02Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-24T15:57:03Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MattStammers/flyingHuggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
onionLad/medlid
|
onionLad
| 2023-07-24T15:56:44Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T15:59:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: medlid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medlid
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2976
- Precision: 0.4183
- Recall: 0.5546
- F1: 0.4769
- Accuracy: 0.9343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 19 | 0.2900 | 0.3809 | 0.5710 | 0.4569 | 0.9285 |
| No log | 2.0 | 38 | 0.2976 | 0.4183 | 0.5546 | 0.4769 | 0.9343 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.11.0
- Datasets 2.13.1
- Tokenizers 0.13.3
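No usage example is given above. A minimal sketch, assuming the checkpoint works with the standard token-classification pipeline; the example sentence is illustrative.
```python
from transformers import pipeline

# aggregation_strategy="simple" groups sub-word tokens into whole entity spans.
ner = pipeline("token-classification", model="onionLad/medlid", aggregation_strategy="simple")
print(ner("The patient was started on 5 mg of lisinopril daily."))
```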
|
VFiona/opus-mt-en-it-finetuned_50000-en-to-it
|
VFiona
| 2023-07-24T15:52:31Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-24T12:41:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-it-finetuned_50000-en-to-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-it-finetuned_50000-en-to-it
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2137
- Bleu: 77.2246
- Gen Len: 28.6587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.2448 | 1.0 | 2674 | 0.2137 | 77.2246 | 28.6587 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.11.0
|
Vrspi/KAY
|
Vrspi
| 2023-07-24T15:52:07Z | 0 | 0 |
keras
|
[
"keras",
"text-generation-inference",
"text-generation",
"en",
"region:us"
] |
text-generation
| 2023-05-02T19:58:10Z |
---
language:
- en
library_name: keras
pipeline_tag: text-generation
tags:
- text-generation-inference
---
|
HilbertS/a2c-AntBulletEnv-v0
|
HilbertS
| 2023-07-24T15:49:04Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T15:13:22Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1210.21 +/- 147.67
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file listing for the actual name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's files for the actual .zip name.
checkpoint = load_from_hub(repo_id="HilbertS/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
d-karpone/distilhubert-finetuned-gtzan
|
d-karpone
| 2023-07-24T15:45:50Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-24T14:41:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.87
- Loss: 0.6300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 1.8192 | 1.0 | 150 | 0.41 | 1.7706 |
| 1.1745 | 2.0 | 300 | 0.64 | 1.2586 |
| 0.7885 | 3.0 | 450 | 0.76 | 0.8419 |
| 0.661 | 4.0 | 600 | 0.81 | 0.7019 |
| 0.3334 | 5.0 | 750 | 0.84 | 0.5766 |
| 0.3265 | 6.0 | 900 | 0.83 | 0.5862 |
| 0.0928 | 7.0 | 1050 | 0.88 | 0.5613 |
| 0.0698 | 8.0 | 1200 | 0.86 | 0.6432 |
| 0.0562 | 9.0 | 1350 | 0.87 | 0.6141 |
| 0.0268 | 10.0 | 1500 | 0.87 | 0.6300 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DopeorNope/IA3-ko-sheep-5.8b
|
DopeorNope
| 2023-07-24T15:37:55Z | 0 | 0 |
peft
|
[
"peft",
"gpt_neox",
"region:us"
] | null | 2023-07-21T16:52:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
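For reference, a minimal sketch of the equivalent `transformers.BitsAndBytesConfig` (reconstructed from the values above, not taken from the original training script):
```python
from transformers import BitsAndBytesConfig

# Mirrors the 8-bit settings listed above; the remaining fields keep their defaults.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```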
### Framework versions
- PEFT 0.5.0.dev0
|
jvilaseca/rl_course_vizdoom_health_gathering_supreme2
|
jvilaseca
| 2023-07-24T15:36:33Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T15:36:26Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.55 +/- 4.03
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jvilaseca/rl_course_vizdoom_health_gathering_supreme2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
rajat/deidentify_2
|
rajat
| 2023-07-24T15:32:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T15:32:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
Technotech/MagicPrompt-tinystories-33M-epoch2
|
Technotech
| 2023-07-24T15:29:02Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"en",
"dataset:Gustavosta/Stable-Diffusion-Prompts",
"license:apache-2.0",
"region:us"
] | null | 2023-07-24T14:24:36Z |
---
library_name: peft
license: apache-2.0
datasets:
- Gustavosta/Stable-Diffusion-Prompts
language:
- en
widget:
- text: "A picture of"
---
## Training config
Rank 8 LoRA, 2 epochs of Stable-Diffusion-Prompts with batch size 128.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
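A minimal loading sketch; the base model id is an assumption inferred from the adapter name, so check `adapter_config.json` for the actual value:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "roneneldan/TinyStories-33M"  # assumption; read base_model_name_or_path from adapter_config.json
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, "Technotech/MagicPrompt-tinystories-33M-epoch2")

inputs = tokenizer("A picture of", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```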
|
MattStammers/stickfetcher
|
MattStammers
| 2023-07-24T15:08:23Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-24T15:00:44Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MattStammers/stickfetcher
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
zodrak/ppo-Huggy
|
zodrak
| 2023-07-24T14:59:45Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-24T14:59:40Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: zodrak/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AnnaMats/Taxi-v3
|
AnnaMats
| 2023-07-24T14:40:14Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T14:40:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # the Deep RL course notebooks use gymnasium imported as gym

# `load_from_hub` is the helper defined in the course notebook (it downloads and unpickles the model dict).
model = load_from_hub(repo_id="AnnaMats/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
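A minimal sketch of rolling out the greedy policy from the loaded Q-table; it assumes the pickled dict follows the Deep RL course convention and exposes a `"qtable"` entry next to `"env_id"`:
```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for the current state
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```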
|
SinghShweta/llama2-qlora-finetunined-codegeneration_py
|
SinghShweta
| 2023-07-24T14:27:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T14:27:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
erikgrip2/mt5-finetuned-for-motion-title
|
erikgrip2
| 2023-07-24T14:22:33Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2010.11934",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-26T19:41:46Z |
---
widget:
- text: "Inom en kommun får offentlig plats användas för det ändamål den är avsedd för. Om den ska användas för annat ändamål än avsett ska tillstånds sökas. Att ansöka om tillstånd ska till exempel ske om man avser att placera ut containrar, uteserveringar, byggnadsställningar, byggsäckar (big bags) eller använder marken för systematisk utplacering av elsparkcyklar."
- text: "Det finns ett samband mellan våld i nära relationer och våld mot husdjur. Våld mot husdjur används ofta som en del av våldet mot den våldsutsatta partnern. En sammanfattning av sambandet är att när djur är utsatta för våld är människor i riskzonen och när människor är utsatta för våld är djur i riskzonen. Att lämna en våldsam relation med husdjur kan vara svårt. När den våldsutsatta behöver ta sig till skyddat boende kan djuret ofta inte följa med eftersom de flesta skyddade boenden i Sverige saknar möjlighet för våldsutsatta att kunna ta med sina husdjur. Djurens låga status gör att socialtjänsten oftast inte beviljar stöd för kostnader för djuret. Det innebär att den som är utsatt ofta inte får hjälp med att exempelvis placera hunden på ett hundpensionat och valet står då mellan att lämna kvar djuret och fly själv eller att stanna kvar och många väljer att stanna kvar i relationen för att skydda djuret. Separationsvåldet är ofta det dödligaste våldet – när den våldsutsatta lämnar relationen är hotbilden mot hen som störst. Om djuret finns kvar i hemmet i det läget är risken för våld och även dödligt våld mot djuret hög. Det behövs specialiserade utbildningsinsatser för alla former av skyddade boenden, så att vikten av tryggheten för den våldsutsattas djur tas på allvar. Utöver det behövs även stöd för tillfälliga trygga placeringar av de våldsutsattas husdjur, så att personer som flyttar till vård- och omsorgsboende ska kunna ha kvar sina sällskapsdjur."
- text: "Handeln med hotade arter är världens fjärde största illegala handel, och omsätter ofantliga summor som göder kriminella nätverk. Forskare har länge varnat om att allt fler virus riskerar att spridas från djur till människor. Utbrottet av Coronaviruset Covid-19 har återigen satt strålkastarljuset på kopplingen mellan djurmarknader och sjukdomar. En anledning till att risken för virusspridning ökar på marknaderna är att det är ohygieniska förhållanden med en stor blandning av olika djurarter, ofta stressade – som aldrig skulle träffas naturligt i det vilda. Olika virus kan då hoppa från en djurart till en annan och sedan vidare till människan. På den internationella marknaden säljs varje år flera miljoner vildfångade fåglar, tiotusentals apor och ett oräkneligt antal andra djur och växter år på den internationella marknaden. Djuren säljs som exotiska husdjur, eller så används deras pälsar och skin inom klädindustrin. Delar från skelett, horn och betar används som ingredienser i naturmediciner och mat, till smycken och prydnadsföremål. Den stora efterfrågan på elfenben bidrar till allt för stor jakt på världens elefanter. Även världens noshörningsarter hotas av tjuvjakt och illegal handel. År 1973 kom ett antal länder överens om att handel inte får vara orsaken till att vilda djur och växter utrotas. Överenskommelsen heter CITES (Convention on International Trade in Endangered Species of wild flora and fauna). CITES-konventionen innebär att inga arter från den vilda faunan eller floran blir föremål för en ohållbar exploatering på grund av internationell handel. Och 180 länder har skrivit under konventionen."
inference:
parameters:
early_stopping: True
length_penalty: 1.0
max_length: 64
num_beams: 2
repetition_penalty: 2.5
license: apache-2.0
pipeline_tag: text2text-generation
---
A fine-tuned model for generating Swedish parliament motion titles, based on the [google/mt5-small](https://huggingface.co/google/mt5-small) version of [Google's mT5](https://github.com/google-research/multilingual-t5).
The model was fine-tuned on a dataset of 145k motions from the Swedish Parliament, dating from 1970-01-12 to 2022-11-23. For more info, see the [GitHub repo](https://github.com/erikgrip/swedish_parliament_motion_summarization).
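A minimal inference sketch using the generation settings listed in the widget configuration above (a sketch; the card itself does not prescribe an exact API call):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "erikgrip2/mt5-finetuned-for-motion-title"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

motion_text = "Inom en kommun får offentlig plats användas ..."  # paste the full motion text here
inputs = tokenizer(motion_text, return_tensors="pt", truncation=True)
summary_ids = model.generate(
    **inputs,
    num_beams=2,
    max_length=64,
    early_stopping=True,
    length_penalty=1.0,
    repetition_penalty=2.5,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```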
**About Google's mT5 model:**
mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: mT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5)
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel*
## Abstract
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available.
|
patebel/ppo-Huggy
|
patebel
| 2023-07-24T14:20:22Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-24T14:20:11Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: patebel/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
fawadkhan/alpaca-13b-finetune
|
fawadkhan
| 2023-07-24T14:18:25Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T14:18:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
osmancanyuca/q-FrozenLake-v1-4x4-slippery
|
osmancanyuca
| 2023-07-24T14:14:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T14:14:55Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.74 +/- 0.44
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # the Deep RL course notebooks use gymnasium imported as gym

# `load_from_hub` is the helper defined in the course notebook (it downloads and unpickles the model dict).
model = load_from_hub(repo_id="osmancanyuca/q-FrozenLake-v1-4x4-slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
thomaseding/pixelnet
|
thomaseding
| 2023-07-24T14:13:05Z | 0 | 12 | null |
[
"region:us"
] | null | 2023-07-03T18:35:38Z |
https://huggingface.co/thomaseding/pixelnet
---
license: creativeml-openrail-m
---
# PixelNet (Thomas Eding)
### About:
PixelNet is a ControlNet model for Stable Diffusion.
It takes a checkerboard image as input, which is used to control where logical pixels are to be placed.
This is currently an experimental proof of concept. I trained it on around 2000 pixel-art/pixelated images that I generated using Stable Diffusion (with a lot of cleanup and manual curation). The model is not very good, but it does work on grids of up to about 64 checker "pixels" when the smallest image dimension is 512. I can successfully get the model to understand 128x128 checkerboards for image generations of at least 1024x1024 pixels.
The model works best with the "Balanced" ControlNet setting. Try using a "Control Weight" of 1 or a little higher.
"ControlNet Is More Important" seems to require a heavy "Control Weight" setting to have an effect. Try using a "Control Weight" of 2.
A low "Control Weight" setting seems to produce images that resemble smooth paintings or vector art.
Smaller checker grids tend to perform worse (e.g. a 5x5 grid vs. a 32x32 grid).
Too low or too high of a "Steps" value breaks the model. Try something like 15-30, depending on an assortment of factors. Feel free to experiment with the built-in A1111 "X/Y/Z Plot" script.
### Usage:
To install, copy the `.safetensors` and `.yaml` files to your Automatic1111 ControlNet extension's model directory (e.g. `stable-diffusion-webui/extensions/sd-webui-controlnet/models`). Completely restart the Automatic1111 server after doing this and then refresh the web page.
There is no preprocessor. Instead, supply a black and white checkerboard image as the control input. Various control image grids can be found in this repository's `grids` directory. (https://huggingface.co/thomaseding/pixelnet/resolve/main/grids/grids.zip)
The script `gen_checker.py` can be used to generate checkerboard images of arbitrary sizes. (https://huggingface.co/thomaseding/pixelnet/blob/main/gen_checker.py) Example: `python gen_checker.py --upscale-dims 512x512 --dims 70x70 --output-file control.png` to generate a 70x70 checkerboard image upscaled to 512x512 pixels.
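If you just need a quick control image and don't want to run the repo script, the following Pillow snippet builds an equivalent nearest-neighbour-upscaled checkerboard (an illustration only, not the `gen_checker.py` code):
```python
from PIL import Image

def make_checker(grid_w, grid_h, out_w=512, out_h=512):
    """Build a grid_w x grid_h black/white checkerboard and upscale it with nearest-neighbour."""
    img = Image.new("L", (grid_w, grid_h))
    img.putdata([255 if (x + y) % 2 == 0 else 0 for y in range(grid_h) for x in range(grid_w)])
    return img.resize((out_w, out_h), resample=Image.NEAREST).convert("RGB")

make_checker(70, 70).save("control.png")  # same 70x70 grid at 512x512 as the example above
```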
The script `controlled_downscale.py` is a custom downscaler made specifically for this model. You provide both the generated image and the control image used to generate it. It will downscale according to the control grid. (https://huggingface.co/thomaseding/pixelnet/blob/main/controlled_downscale.py) Example: `python controlled_downscale.py --control diffusion_control.png --input diffusion_output.png --output-downscaled downscaled.png --output-quantized quantized.png --trim-cropped-edges false --sample-radius 2`. See `--help` for more info.
### FAQ:
Q: Are there any "Trigger Words" for this model?
A: Not really. I removed all words pertaining to style for my training data. This includes words like "pixel", "high quality", etc. In fact adding "pixel art" to the prompt seems to make the model perform worse (in my experience). One word I do find useful is to add "garish" to the negative prompt when the output coloring is hyper.
Q: Png or Jpeg?
A: Use Png. Jpeg's compression algorithm is terrible for pixel art.
Q: Why is this needed? Can't I use a post-processor to downscale the image?
A: From my experience SD has a hard time creating genuine pixel art (even with dedicated base models and loras): logical pixel sizes get mismatched, curves come out smooth, and what appears to be a straight line at a glance might bend around. This can cause post-processors to create artifacts based on quantization rounding a pixel to a position one pixel off in some direction. This model is intended to help fix that.
Q: Is there special A1111 user-interface integration?
A: Yes... but not yet merged into the standard ControlNet extension's code. See (https://civitai.com/posts/371477) if you want to integrate the changes yourself in the meantime.
Q: Should I use this model with a post-processor?
A: Yes, I still recommend you do post-processing to clean up the image. This model is not perfect and will still have artifacts. Note that none of the sample output images are post-processed; they are raw outputs from the model. Consider sampling the image based on the location of the control grid checker faces. The provided `controlled_downscale.py` script can do this for you. You can take the output of this script (presumably the `--output-downscaled` file) and then run it through a different post-processor (e.g. to refine the color palette). I only tested the script for a few generated images, so it might still be a bit buggy in the way it computes the sample locations. So for now, compare the output of the script. You may find that supplying an alternative control grid image may be beneficial, or may find that using some alternative post-processing method may be better.
Q: Does the model support non-square grids?
A: Kind of. I trained it with some non-perfect square grids (when pre-upscaled checkerboards are not a factor of the upscaled image size), so in that sense it should work fine. I also trained it with some checkerboard images with genuine non-square rectangular faces (e.g. double-wide pixels).
Q: Will there be a better trained model of this in the future?
A: I hope so. I will need to curate a much larger and higher-quality dataset, which might take me a long time. Regardless, I plan on making the control effect more faithful to the control image. I may decide to try to generalize this beyond rectangular grids, but that is not a priority. I think including non-square rectangular faces in some of the training data was perhaps harmful to the model's performance. Likewise for grids smaller than 8x8. Perhaps it is better to train separate models for very small grids (but at that point, you might as well make the images by hand) and for non-square rectangular grids.
Q: What about color quantization?
A: Coming soon, "PaletteNet".
### Sample Outputs:



|
alohahawaii/llama2-qlora-finetunined-french
|
alohahawaii
| 2023-07-24T14:08:51Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T07:10:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
sheLpersss7/Asahina-Mafuyu1014
|
sheLpersss7
| 2023-07-24T14:06:46Z | 0 | 0 |
asteroid
|
[
"asteroid",
"music",
"voice-activity-detection",
"ja",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"arxiv:1910.09700",
"region:us"
] |
voice-activity-detection
| 2023-07-24T13:52:24Z |
---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- ja
metrics:
- character
library_name: asteroid
pipeline_tag: voice-activity-detection
tags:
- music
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PraveenJesu/openai-whisper-medium-murf-audio-augment-v1.0
|
PraveenJesu
| 2023-07-24T13:56:58Z | 1 | 0 |
peft
|
[
"peft",
"pytorch",
"region:us"
] | null | 2023-07-18T15:10:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Adi0010/a2c-AntBulletEnv-v0
|
Adi0010
| 2023-07-24T13:28:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T13:27:26Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1413.42 +/- 151.86
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file listing for the actual name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's files for the actual .zip name.
checkpoint = load_from_hub(repo_id="Adi0010/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
malanevans/PPO-LunarLander-v2_v2
|
malanevans
| 2023-07-24T13:19:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T13:19:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.82 +/- 11.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file listing for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's files for the actual .zip name.
checkpoint = load_from_hub(repo_id="malanevans/PPO-LunarLander-v2_v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
NasimB/cl-length-260k
|
NasimB
| 2023-07-24T13:17:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-24T10:24:45Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cl-length-260k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cl-length-260k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.8312 | 0.06 | 500 | 5.5636 |
| 4.6153 | 0.11 | 1000 | 5.2363 |
| 4.354 | 0.17 | 1500 | 5.0666 |
| 4.174 | 0.23 | 2000 | 4.9628 |
| 4.05 | 0.28 | 2500 | 4.9068 |
| 3.9481 | 0.34 | 3000 | 4.8695 |
| 3.8506 | 0.39 | 3500 | 4.8265 |
| 3.7636 | 0.45 | 4000 | 4.8014 |
| 3.6826 | 0.51 | 4500 | 4.7895 |
| 3.5968 | 0.56 | 5000 | 4.7650 |
| 3.5137 | 0.62 | 5500 | 4.7536 |
| 3.4435 | 0.68 | 6000 | 4.7414 |
| 3.3713 | 0.73 | 6500 | 4.7371 |
| 3.3116 | 0.79 | 7000 | 4.7364 |
| 3.2597 | 0.84 | 7500 | 4.7324 |
| 3.2291 | 0.9 | 8000 | 4.7300 |
| 3.2165 | 0.96 | 8500 | 4.7260 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Hawk91/ddpm-celebahq-finetuned-anime-5epochs
|
Hawk91
| 2023-07-24T13:01:26Z | 33 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-24T12:59:15Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This is a CelebA-HQ diffusion model fine-tuned on an anime faces dataset.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Hawk91/ddpm-celebahq-finetuned-anime-5epochs')
image = pipeline().images[0]
image
```
|
jordyvl/vit-base_rvl_tobacco_test_entropy_large
|
jordyvl
| 2023-07-24T12:57:09Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-24T11:34:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl_tobacco_test_entropy_large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl_tobacco_test_entropy_large
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4296
- Accuracy: 0.9
- Brier Loss: 0.1643
- Nll: 1.3751
- F1 Micro: 0.9
- F1 Macro: 0.9013
- Ece: 0.1074
- Aurc: 0.0220
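A minimal inference sketch with the image-classification pipeline (a sketch; it assumes the image processor config is included in the repository):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jordyvl/vit-base_rvl_tobacco_test_entropy_large")
print(classifier("path/to/document_scan.png"))  # replace with an actual document image
```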
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 12 | 2.5324 | 0.05 | 0.9010 | 18.0477 | 0.0500 | 0.0539 | 0.1568 | 0.9609 |
| No log | 2.0 | 25 | 2.4440 | 0.23 | 0.8823 | 11.6657 | 0.23 | 0.1501 | 0.2626 | 0.7837 |
| No log | 2.96 | 37 | 2.3026 | 0.435 | 0.8481 | 4.6649 | 0.435 | 0.2723 | 0.4061 | 0.2744 |
| No log | 4.0 | 50 | 2.0897 | 0.71 | 0.7862 | 3.1934 | 0.7100 | 0.5915 | 0.5791 | 0.0865 |
| No log | 4.96 | 62 | 1.8603 | 0.785 | 0.7065 | 1.8079 | 0.785 | 0.7047 | 0.6067 | 0.0508 |
| No log | 6.0 | 75 | 1.6025 | 0.845 | 0.6037 | 1.3384 | 0.845 | 0.7944 | 0.5657 | 0.0393 |
| No log | 6.96 | 87 | 1.3819 | 0.885 | 0.5101 | 1.1034 | 0.885 | 0.8660 | 0.5288 | 0.0355 |
| No log | 8.0 | 100 | 1.1701 | 0.905 | 0.4187 | 0.9002 | 0.905 | 0.8958 | 0.4765 | 0.0302 |
| No log | 8.96 | 112 | 1.0024 | 0.9 | 0.3462 | 0.8889 | 0.9 | 0.8913 | 0.3961 | 0.0259 |
| No log | 10.0 | 125 | 0.8487 | 0.91 | 0.2796 | 1.0389 | 0.91 | 0.9067 | 0.3464 | 0.0225 |
| No log | 10.96 | 137 | 0.7522 | 0.91 | 0.2395 | 1.0408 | 0.91 | 0.9090 | 0.2930 | 0.0225 |
| No log | 12.0 | 150 | 0.6862 | 0.905 | 0.2163 | 1.2027 | 0.905 | 0.9094 | 0.2529 | 0.0250 |
| No log | 12.96 | 162 | 0.6437 | 0.895 | 0.2035 | 1.0504 | 0.895 | 0.8998 | 0.2306 | 0.0230 |
| No log | 14.0 | 175 | 0.5896 | 0.905 | 0.1851 | 0.8758 | 0.905 | 0.9040 | 0.2122 | 0.0201 |
| No log | 14.96 | 187 | 0.5167 | 0.92 | 0.1528 | 0.7812 | 0.92 | 0.9119 | 0.1872 | 0.0143 |
| No log | 16.0 | 200 | 0.4894 | 0.935 | 0.1450 | 0.9154 | 0.935 | 0.9314 | 0.1962 | 0.0186 |
| No log | 16.96 | 212 | 0.4769 | 0.91 | 0.1451 | 1.1071 | 0.91 | 0.9109 | 0.1703 | 0.0196 |
| No log | 18.0 | 225 | 0.4573 | 0.91 | 0.1399 | 1.0919 | 0.91 | 0.9084 | 0.1621 | 0.0182 |
| No log | 18.96 | 237 | 0.4501 | 0.905 | 0.1409 | 0.9304 | 0.905 | 0.9056 | 0.1549 | 0.0177 |
| No log | 20.0 | 250 | 0.4434 | 0.905 | 0.1393 | 1.2519 | 0.905 | 0.9056 | 0.1552 | 0.0180 |
| No log | 20.96 | 262 | 0.4391 | 0.905 | 0.1403 | 1.2476 | 0.905 | 0.9056 | 0.1434 | 0.0178 |
| No log | 22.0 | 275 | 0.4326 | 0.905 | 0.1401 | 1.2419 | 0.905 | 0.9056 | 0.1396 | 0.0180 |
| No log | 22.96 | 287 | 0.4290 | 0.905 | 0.1407 | 1.2397 | 0.905 | 0.9051 | 0.1414 | 0.0181 |
| No log | 24.0 | 300 | 0.4255 | 0.905 | 0.1408 | 1.2373 | 0.905 | 0.9051 | 0.1346 | 0.0182 |
| No log | 24.96 | 312 | 0.4238 | 0.905 | 0.1418 | 1.2372 | 0.905 | 0.9051 | 0.1308 | 0.0183 |
| No log | 26.0 | 325 | 0.4212 | 0.905 | 0.1423 | 1.2348 | 0.905 | 0.9051 | 0.1288 | 0.0184 |
| No log | 26.96 | 337 | 0.4197 | 0.905 | 0.1429 | 1.2350 | 0.905 | 0.9051 | 0.1242 | 0.0187 |
| No log | 28.0 | 350 | 0.4181 | 0.905 | 0.1436 | 1.2331 | 0.905 | 0.9051 | 0.1298 | 0.0187 |
| No log | 28.96 | 362 | 0.4171 | 0.905 | 0.1443 | 1.2339 | 0.905 | 0.9051 | 0.1341 | 0.0188 |
| No log | 30.0 | 375 | 0.4158 | 0.905 | 0.1449 | 1.2322 | 0.905 | 0.9051 | 0.1261 | 0.0190 |
| No log | 30.96 | 387 | 0.4156 | 0.905 | 0.1458 | 1.2337 | 0.905 | 0.9051 | 0.1310 | 0.0190 |
| No log | 32.0 | 400 | 0.4145 | 0.905 | 0.1463 | 1.2323 | 0.905 | 0.9051 | 0.1244 | 0.0192 |
| No log | 32.96 | 412 | 0.4145 | 0.905 | 0.1472 | 1.2342 | 0.905 | 0.9051 | 0.1175 | 0.0193 |
| No log | 34.0 | 425 | 0.4140 | 0.905 | 0.1477 | 1.2353 | 0.905 | 0.9051 | 0.1163 | 0.0194 |
| No log | 34.96 | 437 | 0.4138 | 0.905 | 0.1485 | 1.2384 | 0.905 | 0.9051 | 0.1296 | 0.0195 |
| No log | 36.0 | 450 | 0.4137 | 0.905 | 0.1491 | 1.3855 | 0.905 | 0.9051 | 0.1271 | 0.0195 |
| No log | 36.96 | 462 | 0.4134 | 0.905 | 0.1497 | 1.3846 | 0.905 | 0.9051 | 0.1264 | 0.0196 |
| No log | 38.0 | 475 | 0.4137 | 0.905 | 0.1504 | 1.3842 | 0.905 | 0.9051 | 0.1254 | 0.0196 |
| No log | 38.96 | 487 | 0.4136 | 0.91 | 0.1509 | 1.3835 | 0.91 | 0.9109 | 0.1160 | 0.0196 |
| 0.5543 | 40.0 | 500 | 0.4140 | 0.91 | 0.1515 | 1.3831 | 0.91 | 0.9109 | 0.1190 | 0.0198 |
| 0.5543 | 40.96 | 512 | 0.4138 | 0.91 | 0.1519 | 1.3825 | 0.91 | 0.9109 | 0.1186 | 0.0198 |
| 0.5543 | 42.0 | 525 | 0.4143 | 0.91 | 0.1526 | 1.3822 | 0.91 | 0.9109 | 0.1180 | 0.0198 |
| 0.5543 | 42.96 | 537 | 0.4143 | 0.91 | 0.1530 | 1.3816 | 0.91 | 0.9109 | 0.1220 | 0.0199 |
| 0.5543 | 44.0 | 550 | 0.4148 | 0.91 | 0.1536 | 1.3815 | 0.91 | 0.9109 | 0.1214 | 0.0200 |
| 0.5543 | 44.96 | 562 | 0.4149 | 0.91 | 0.1539 | 1.3809 | 0.91 | 0.9109 | 0.1208 | 0.0200 |
| 0.5543 | 46.0 | 575 | 0.4154 | 0.91 | 0.1545 | 1.3807 | 0.91 | 0.9109 | 0.1173 | 0.0200 |
| 0.5543 | 46.96 | 587 | 0.4157 | 0.91 | 0.1549 | 1.3803 | 0.91 | 0.9109 | 0.1084 | 0.0202 |
| 0.5543 | 48.0 | 600 | 0.4162 | 0.91 | 0.1554 | 1.3801 | 0.91 | 0.9109 | 0.1080 | 0.0202 |
| 0.5543 | 48.96 | 612 | 0.4163 | 0.91 | 0.1557 | 1.3798 | 0.91 | 0.9109 | 0.1068 | 0.0202 |
| 0.5543 | 50.0 | 625 | 0.4169 | 0.91 | 0.1562 | 1.3795 | 0.91 | 0.9109 | 0.1066 | 0.0203 |
| 0.5543 | 50.96 | 637 | 0.4171 | 0.91 | 0.1565 | 1.3793 | 0.91 | 0.9109 | 0.1064 | 0.0203 |
| 0.5543 | 52.0 | 650 | 0.4177 | 0.91 | 0.1570 | 1.3791 | 0.91 | 0.9109 | 0.1120 | 0.0203 |
| 0.5543 | 52.96 | 662 | 0.4180 | 0.91 | 0.1573 | 1.3789 | 0.91 | 0.9109 | 0.1117 | 0.0203 |
| 0.5543 | 54.0 | 675 | 0.4185 | 0.91 | 0.1577 | 1.3786 | 0.91 | 0.9109 | 0.1065 | 0.0204 |
| 0.5543 | 54.96 | 687 | 0.4187 | 0.91 | 0.1579 | 1.3785 | 0.91 | 0.9109 | 0.1063 | 0.0204 |
| 0.5543 | 56.0 | 700 | 0.4193 | 0.91 | 0.1584 | 1.3782 | 0.91 | 0.9109 | 0.1062 | 0.0204 |
| 0.5543 | 56.96 | 712 | 0.4196 | 0.91 | 0.1586 | 1.3782 | 0.91 | 0.9109 | 0.1058 | 0.0206 |
| 0.5543 | 58.0 | 725 | 0.4200 | 0.91 | 0.1590 | 1.3779 | 0.91 | 0.9109 | 0.1060 | 0.0206 |
| 0.5543 | 58.96 | 737 | 0.4203 | 0.91 | 0.1592 | 1.3778 | 0.91 | 0.9109 | 0.1095 | 0.0207 |
| 0.5543 | 60.0 | 750 | 0.4209 | 0.91 | 0.1596 | 1.3776 | 0.91 | 0.9109 | 0.1055 | 0.0209 |
| 0.5543 | 60.96 | 762 | 0.4213 | 0.91 | 0.1598 | 1.3776 | 0.91 | 0.9109 | 0.1091 | 0.0209 |
| 0.5543 | 62.0 | 775 | 0.4216 | 0.91 | 0.1601 | 1.3772 | 0.91 | 0.9109 | 0.1019 | 0.0210 |
| 0.5543 | 62.96 | 787 | 0.4221 | 0.91 | 0.1603 | 1.3773 | 0.91 | 0.9109 | 0.1017 | 0.0211 |
| 0.5543 | 64.0 | 800 | 0.4225 | 0.905 | 0.1606 | 1.3769 | 0.905 | 0.9064 | 0.0997 | 0.0211 |
| 0.5543 | 64.96 | 812 | 0.4228 | 0.905 | 0.1608 | 1.3771 | 0.905 | 0.9064 | 0.0995 | 0.0211 |
| 0.5543 | 66.0 | 825 | 0.4232 | 0.9 | 0.1611 | 1.3766 | 0.9 | 0.9013 | 0.1046 | 0.0212 |
| 0.5543 | 66.96 | 837 | 0.4236 | 0.9 | 0.1613 | 1.3768 | 0.9 | 0.9013 | 0.1045 | 0.0213 |
| 0.5543 | 68.0 | 850 | 0.4240 | 0.9 | 0.1615 | 1.3764 | 0.9 | 0.9013 | 0.1026 | 0.0213 |
| 0.5543 | 68.96 | 862 | 0.4243 | 0.9 | 0.1617 | 1.3765 | 0.9 | 0.9013 | 0.1043 | 0.0213 |
| 0.5543 | 70.0 | 875 | 0.4247 | 0.9 | 0.1619 | 1.3762 | 0.9 | 0.9013 | 0.1060 | 0.0213 |
| 0.5543 | 70.96 | 887 | 0.4249 | 0.9 | 0.1620 | 1.3762 | 0.9 | 0.9013 | 0.1077 | 0.0214 |
| 0.5543 | 72.0 | 900 | 0.4254 | 0.9 | 0.1623 | 1.3760 | 0.9 | 0.9013 | 0.1057 | 0.0214 |
| 0.5543 | 72.96 | 912 | 0.4257 | 0.9 | 0.1624 | 1.3760 | 0.9 | 0.9013 | 0.1074 | 0.0213 |
| 0.5543 | 74.0 | 925 | 0.4259 | 0.9 | 0.1625 | 1.3758 | 0.9 | 0.9013 | 0.1056 | 0.0213 |
| 0.5543 | 74.96 | 937 | 0.4262 | 0.9 | 0.1627 | 1.3758 | 0.9 | 0.9013 | 0.1056 | 0.0214 |
| 0.5543 | 76.0 | 950 | 0.4266 | 0.9 | 0.1629 | 1.3757 | 0.9 | 0.9013 | 0.1058 | 0.0216 |
| 0.5543 | 76.96 | 962 | 0.4268 | 0.9 | 0.1630 | 1.3756 | 0.9 | 0.9013 | 0.1057 | 0.0216 |
| 0.5543 | 78.0 | 975 | 0.4271 | 0.9 | 0.1631 | 1.3755 | 0.9 | 0.9013 | 0.1076 | 0.0216 |
| 0.5543 | 78.96 | 987 | 0.4274 | 0.9 | 0.1632 | 1.3756 | 0.9 | 0.9013 | 0.1075 | 0.0216 |
| 0.0526 | 80.0 | 1000 | 0.4275 | 0.9 | 0.1634 | 1.3754 | 0.9 | 0.9013 | 0.1100 | 0.0217 |
| 0.0526 | 80.96 | 1012 | 0.4278 | 0.9 | 0.1635 | 1.3754 | 0.9 | 0.9013 | 0.1099 | 0.0218 |
| 0.0526 | 82.0 | 1025 | 0.4280 | 0.9 | 0.1636 | 1.3753 | 0.9 | 0.9013 | 0.1098 | 0.0218 |
| 0.0526 | 82.96 | 1037 | 0.4282 | 0.9 | 0.1637 | 1.3753 | 0.9 | 0.9013 | 0.1098 | 0.0218 |
| 0.0526 | 84.0 | 1050 | 0.4284 | 0.9 | 0.1638 | 1.3753 | 0.9 | 0.9013 | 0.1097 | 0.0218 |
| 0.0526 | 84.96 | 1062 | 0.4286 | 0.9 | 0.1638 | 1.3753 | 0.9 | 0.9013 | 0.1097 | 0.0218 |
| 0.0526 | 86.0 | 1075 | 0.4288 | 0.9 | 0.1639 | 1.3752 | 0.9 | 0.9013 | 0.1096 | 0.0218 |
| 0.0526 | 86.96 | 1087 | 0.4289 | 0.9 | 0.1640 | 1.3752 | 0.9 | 0.9013 | 0.1096 | 0.0218 |
| 0.0526 | 88.0 | 1100 | 0.4291 | 0.9 | 0.1641 | 1.3752 | 0.9 | 0.9013 | 0.1095 | 0.0218 |
| 0.0526 | 88.96 | 1112 | 0.4292 | 0.9 | 0.1641 | 1.3752 | 0.9 | 0.9013 | 0.1095 | 0.0218 |
| 0.0526 | 90.0 | 1125 | 0.4293 | 0.9 | 0.1642 | 1.3752 | 0.9 | 0.9013 | 0.1074 | 0.0219 |
| 0.0526 | 90.96 | 1137 | 0.4294 | 0.9 | 0.1642 | 1.3752 | 0.9 | 0.9013 | 0.1075 | 0.0219 |
| 0.0526 | 92.0 | 1150 | 0.4295 | 0.9 | 0.1642 | 1.3752 | 0.9 | 0.9013 | 0.1075 | 0.0220 |
| 0.0526 | 92.96 | 1162 | 0.4295 | 0.9 | 0.1643 | 1.3752 | 0.9 | 0.9013 | 0.1075 | 0.0220 |
| 0.0526 | 94.0 | 1175 | 0.4296 | 0.9 | 0.1643 | 1.3751 | 0.9 | 0.9013 | 0.1075 | 0.0220 |
| 0.0526 | 94.96 | 1187 | 0.4296 | 0.9 | 0.1643 | 1.3751 | 0.9 | 0.9013 | 0.1075 | 0.0220 |
| 0.0526 | 96.0 | 1200 | 0.4296 | 0.9 | 0.1643 | 1.3751 | 0.9 | 0.9013 | 0.1074 | 0.0220 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
Trong-Nghia/bert-large-uncased-detect-dep-v5
|
Trong-Nghia
| 2023-07-24T12:47:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-23T05:52:18Z |
---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-detect-dep-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-detect-dep-v5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Accuracy: 0.735
- F1: 0.8030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6079 | 1.0 | 1502 | 0.5489 | 0.72 | 0.7764 |
| 0.5703 | 2.0 | 3004 | 0.5609 | 0.74 | 0.8110 |
| 0.5169 | 3.0 | 4506 | 0.6567 | 0.735 | 0.8030 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
FelixChao/vicuna-7b-instruct-ft-adapters-ESG-chatting
|
FelixChao
| 2023-07-24T12:46:22Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T12:46:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
sukiee/qlora-koalpaca-polyglot-5.8b-hotissue
|
sukiee
| 2023-07-24T12:41:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T12:41:37Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
jordyvl/dit-finetuned_rvl_tobacco_crl_allv2
|
jordyvl
| 2023-07-24T12:41:41Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-24T11:40:18Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-finetuned_rvl_tobacco_crl_allv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-finetuned_rvl_tobacco_crl_allv2
This model is a fine-tuned version of [microsoft/dit-base-finetuned-rvlcdip](https://huggingface.co/microsoft/dit-base-finetuned-rvlcdip) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5457
- Accuracy: 0.93
- Brier Loss: 0.1595
- Nll: 0.7692
- F1 Micro: 0.93
- F1 Macro: 0.9154
- Ece: 0.2119
- Aurc: 0.0106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 2.3428 | 0.005 | 0.9059 | 8.8950 | 0.005 | 0.0048 | 0.1391 | 0.9945 |
| No log | 1.96 | 6 | 2.3437 | 0.005 | 0.9052 | 8.8527 | 0.005 | 0.0048 | 0.1390 | 0.9942 |
| No log | 2.96 | 9 | 2.3397 | 0.005 | 0.9038 | 8.6108 | 0.005 | 0.0048 | 0.1389 | 0.9940 |
| No log | 3.96 | 12 | 2.3292 | 0.01 | 0.9018 | 7.9364 | 0.01 | 0.0091 | 0.1413 | 0.9925 |
| No log | 4.96 | 15 | 2.3200 | 0.02 | 0.8991 | 7.1029 | 0.02 | 0.0303 | 0.1462 | 0.9876 |
| No log | 5.96 | 18 | 2.3074 | 0.05 | 0.8958 | 6.6583 | 0.0500 | 0.0527 | 0.1638 | 0.9775 |
| No log | 6.96 | 21 | 2.2903 | 0.155 | 0.8918 | 6.3637 | 0.155 | 0.1012 | 0.2321 | 0.9477 |
| No log | 7.96 | 24 | 2.2706 | 0.18 | 0.8869 | 6.0129 | 0.18 | 0.1088 | 0.2487 | 0.9279 |
| No log | 8.96 | 27 | 2.2479 | 0.195 | 0.8810 | 4.5304 | 0.195 | 0.1154 | 0.2509 | 0.8991 |
| No log | 9.96 | 30 | 2.2212 | 0.215 | 0.8734 | 3.2783 | 0.2150 | 0.1365 | 0.2504 | 0.8008 |
| No log | 10.96 | 33 | 2.1853 | 0.305 | 0.8631 | 2.6641 | 0.305 | 0.1889 | 0.3073 | 0.6426 |
| No log | 11.96 | 36 | 2.1391 | 0.405 | 0.8489 | 2.2824 | 0.405 | 0.2593 | 0.3747 | 0.4193 |
| No log | 12.96 | 39 | 2.0878 | 0.485 | 0.8327 | 1.8576 | 0.485 | 0.3408 | 0.4203 | 0.2962 |
| No log | 13.96 | 42 | 2.0311 | 0.58 | 0.8157 | 1.4206 | 0.58 | 0.4327 | 0.4908 | 0.2186 |
| No log | 14.96 | 45 | 1.9707 | 0.605 | 0.7970 | 1.2940 | 0.605 | 0.4601 | 0.4866 | 0.1784 |
| No log | 15.96 | 48 | 1.9196 | 0.64 | 0.7803 | 1.2372 | 0.64 | 0.4958 | 0.5098 | 0.1483 |
| No log | 16.96 | 51 | 1.8690 | 0.68 | 0.7630 | 1.1960 | 0.68 | 0.5548 | 0.5403 | 0.1286 |
| No log | 17.96 | 54 | 1.8192 | 0.73 | 0.7448 | 1.1405 | 0.7300 | 0.6227 | 0.5655 | 0.0954 |
| No log | 18.96 | 57 | 1.7691 | 0.8 | 0.7272 | 1.1234 | 0.8000 | 0.6965 | 0.6131 | 0.0618 |
| No log | 19.96 | 60 | 1.7210 | 0.835 | 0.7096 | 1.0912 | 0.835 | 0.7380 | 0.6351 | 0.0463 |
| No log | 20.96 | 63 | 1.6734 | 0.84 | 0.6912 | 1.0707 | 0.8400 | 0.7410 | 0.6305 | 0.0404 |
| No log | 21.96 | 66 | 1.6273 | 0.85 | 0.6724 | 1.0024 | 0.85 | 0.7458 | 0.6316 | 0.0335 |
| No log | 22.96 | 69 | 1.5782 | 0.86 | 0.6525 | 0.9764 | 0.8600 | 0.7612 | 0.6265 | 0.0289 |
| No log | 23.96 | 72 | 1.5293 | 0.87 | 0.6327 | 0.9580 | 0.87 | 0.7690 | 0.6211 | 0.0227 |
| No log | 24.96 | 75 | 1.4840 | 0.87 | 0.6131 | 0.9549 | 0.87 | 0.7730 | 0.6157 | 0.0223 |
| No log | 25.96 | 78 | 1.4425 | 0.88 | 0.5937 | 0.9521 | 0.88 | 0.7829 | 0.6121 | 0.0209 |
| No log | 26.96 | 81 | 1.4013 | 0.875 | 0.5740 | 0.9456 | 0.875 | 0.7799 | 0.5931 | 0.0216 |
| No log | 27.96 | 84 | 1.3626 | 0.875 | 0.5550 | 0.9392 | 0.875 | 0.7828 | 0.5840 | 0.0219 |
| No log | 28.96 | 87 | 1.3237 | 0.87 | 0.5360 | 0.9386 | 0.87 | 0.7802 | 0.5656 | 0.0225 |
| No log | 29.96 | 90 | 1.2847 | 0.875 | 0.5166 | 0.9371 | 0.875 | 0.7854 | 0.5605 | 0.0208 |
| No log | 30.96 | 93 | 1.2515 | 0.88 | 0.4993 | 0.8803 | 0.88 | 0.7875 | 0.5431 | 0.0205 |
| No log | 31.96 | 96 | 1.2196 | 0.89 | 0.4824 | 0.9276 | 0.89 | 0.8120 | 0.5328 | 0.0192 |
| No log | 32.96 | 99 | 1.1854 | 0.885 | 0.4648 | 0.9279 | 0.885 | 0.8020 | 0.5183 | 0.0195 |
| No log | 33.96 | 102 | 1.1523 | 0.895 | 0.4480 | 0.9823 | 0.895 | 0.8347 | 0.5018 | 0.0189 |
| No log | 34.96 | 105 | 1.1231 | 0.895 | 0.4324 | 0.9680 | 0.895 | 0.8347 | 0.4931 | 0.0187 |
| No log | 35.96 | 108 | 1.0944 | 0.905 | 0.4174 | 0.9642 | 0.905 | 0.8527 | 0.4880 | 0.0173 |
| No log | 36.96 | 111 | 1.0648 | 0.91 | 0.4020 | 0.9006 | 0.91 | 0.8720 | 0.4702 | 0.0183 |
| No log | 37.96 | 114 | 1.0380 | 0.925 | 0.3885 | 0.8823 | 0.925 | 0.9005 | 0.4695 | 0.0160 |
| No log | 38.96 | 117 | 1.0127 | 0.92 | 0.3758 | 0.8700 | 0.92 | 0.8954 | 0.4523 | 0.0171 |
| No log | 39.96 | 120 | 0.9882 | 0.915 | 0.3629 | 0.8668 | 0.915 | 0.8864 | 0.4399 | 0.0170 |
| No log | 40.96 | 123 | 0.9655 | 0.93 | 0.3509 | 0.8642 | 0.93 | 0.9048 | 0.4435 | 0.0150 |
| No log | 41.96 | 126 | 0.9452 | 0.925 | 0.3412 | 0.8563 | 0.925 | 0.9016 | 0.4240 | 0.0159 |
| No log | 42.96 | 129 | 0.9248 | 0.925 | 0.3312 | 0.8581 | 0.925 | 0.9016 | 0.4130 | 0.0160 |
| No log | 43.96 | 132 | 0.9037 | 0.935 | 0.3207 | 0.8536 | 0.935 | 0.9204 | 0.4066 | 0.0159 |
| No log | 44.96 | 135 | 0.8859 | 0.93 | 0.3120 | 0.8420 | 0.93 | 0.9154 | 0.3929 | 0.0166 |
| No log | 45.96 | 138 | 0.8695 | 0.93 | 0.3039 | 0.8342 | 0.93 | 0.9154 | 0.3947 | 0.0163 |
| No log | 46.96 | 141 | 0.8536 | 0.935 | 0.2959 | 0.8340 | 0.935 | 0.9204 | 0.3970 | 0.0152 |
| No log | 47.96 | 144 | 0.8381 | 0.935 | 0.2889 | 0.8297 | 0.935 | 0.9204 | 0.3877 | 0.0150 |
| No log | 48.96 | 147 | 0.8217 | 0.94 | 0.2810 | 0.8237 | 0.94 | 0.9269 | 0.3783 | 0.0147 |
| No log | 49.96 | 150 | 0.8060 | 0.94 | 0.2725 | 0.8241 | 0.94 | 0.9277 | 0.3762 | 0.0142 |
| No log | 50.96 | 153 | 0.7892 | 0.935 | 0.2642 | 0.8185 | 0.935 | 0.9245 | 0.3522 | 0.0139 |
| No log | 51.96 | 156 | 0.7750 | 0.94 | 0.2577 | 0.8128 | 0.94 | 0.9330 | 0.3512 | 0.0131 |
| No log | 52.96 | 159 | 0.7602 | 0.94 | 0.2517 | 0.8020 | 0.94 | 0.9330 | 0.3517 | 0.0135 |
| No log | 53.96 | 162 | 0.7457 | 0.94 | 0.2449 | 0.7927 | 0.94 | 0.9330 | 0.3443 | 0.0134 |
| No log | 54.96 | 165 | 0.7342 | 0.94 | 0.2393 | 0.8457 | 0.94 | 0.9330 | 0.3248 | 0.0130 |
| No log | 55.96 | 168 | 0.7235 | 0.94 | 0.2344 | 0.8500 | 0.94 | 0.9330 | 0.3244 | 0.0127 |
| No log | 56.96 | 171 | 0.7161 | 0.935 | 0.2303 | 0.8536 | 0.935 | 0.9214 | 0.3181 | 0.0129 |
| No log | 57.96 | 174 | 0.7052 | 0.935 | 0.2251 | 0.8537 | 0.935 | 0.9214 | 0.3122 | 0.0128 |
| No log | 58.96 | 177 | 0.6930 | 0.935 | 0.2192 | 0.8442 | 0.935 | 0.9214 | 0.3123 | 0.0121 |
| No log | 59.96 | 180 | 0.6830 | 0.94 | 0.2146 | 0.8339 | 0.94 | 0.9263 | 0.3003 | 0.0119 |
| No log | 60.96 | 183 | 0.6735 | 0.94 | 0.2108 | 0.8266 | 0.94 | 0.9263 | 0.2944 | 0.0120 |
| No log | 61.96 | 186 | 0.6654 | 0.935 | 0.2068 | 0.8249 | 0.935 | 0.9231 | 0.3001 | 0.0120 |
| No log | 62.96 | 189 | 0.6572 | 0.935 | 0.2029 | 0.8228 | 0.935 | 0.9231 | 0.2784 | 0.0115 |
| No log | 63.96 | 192 | 0.6526 | 0.935 | 0.2001 | 0.8257 | 0.935 | 0.9231 | 0.2749 | 0.0116 |
| No log | 64.96 | 195 | 0.6448 | 0.935 | 0.1977 | 0.8244 | 0.935 | 0.9244 | 0.2643 | 0.0118 |
| No log | 65.96 | 198 | 0.6366 | 0.935 | 0.1944 | 0.8169 | 0.935 | 0.9244 | 0.2607 | 0.0115 |
| No log | 66.96 | 201 | 0.6281 | 0.935 | 0.1908 | 0.8088 | 0.935 | 0.9231 | 0.2745 | 0.0113 |
| No log | 67.96 | 204 | 0.6206 | 0.935 | 0.1876 | 0.8037 | 0.935 | 0.9231 | 0.2784 | 0.0112 |
| No log | 68.96 | 207 | 0.6143 | 0.935 | 0.1853 | 0.8025 | 0.935 | 0.9231 | 0.2662 | 0.0112 |
| No log | 69.96 | 210 | 0.6102 | 0.94 | 0.1839 | 0.8010 | 0.94 | 0.9330 | 0.2584 | 0.0113 |
| No log | 70.96 | 213 | 0.6053 | 0.94 | 0.1822 | 0.7991 | 0.94 | 0.9330 | 0.2639 | 0.0113 |
| No log | 71.96 | 216 | 0.6000 | 0.94 | 0.1802 | 0.7949 | 0.94 | 0.9330 | 0.2516 | 0.0113 |
| No log | 72.96 | 219 | 0.5941 | 0.94 | 0.1773 | 0.7891 | 0.94 | 0.9330 | 0.2670 | 0.0112 |
| No log | 73.96 | 222 | 0.5888 | 0.94 | 0.1747 | 0.7846 | 0.94 | 0.9330 | 0.2457 | 0.0109 |
| No log | 74.96 | 225 | 0.5837 | 0.94 | 0.1724 | 0.7831 | 0.94 | 0.9330 | 0.2425 | 0.0107 |
| No log | 75.96 | 228 | 0.5801 | 0.94 | 0.1711 | 0.7857 | 0.94 | 0.9330 | 0.2411 | 0.0108 |
| No log | 76.96 | 231 | 0.5783 | 0.935 | 0.1708 | 0.7884 | 0.935 | 0.9231 | 0.2342 | 0.0110 |
| No log | 77.96 | 234 | 0.5767 | 0.93 | 0.1707 | 0.7882 | 0.93 | 0.9139 | 0.2375 | 0.0109 |
| No log | 78.96 | 237 | 0.5754 | 0.93 | 0.1703 | 0.7863 | 0.93 | 0.9154 | 0.2255 | 0.0109 |
| No log | 79.96 | 240 | 0.5732 | 0.93 | 0.1694 | 0.7848 | 0.93 | 0.9154 | 0.2365 | 0.0111 |
| No log | 80.96 | 243 | 0.5715 | 0.93 | 0.1685 | 0.7834 | 0.93 | 0.9154 | 0.2433 | 0.0111 |
| No log | 81.96 | 246 | 0.5675 | 0.93 | 0.1672 | 0.7803 | 0.93 | 0.9154 | 0.2423 | 0.0110 |
| No log | 82.96 | 249 | 0.5648 | 0.93 | 0.1661 | 0.7773 | 0.93 | 0.9154 | 0.2352 | 0.0108 |
| No log | 83.96 | 252 | 0.5624 | 0.93 | 0.1653 | 0.7753 | 0.93 | 0.9154 | 0.2343 | 0.0108 |
| No log | 84.96 | 255 | 0.5608 | 0.93 | 0.1646 | 0.7746 | 0.93 | 0.9154 | 0.2332 | 0.0108 |
| No log | 85.96 | 258 | 0.5589 | 0.93 | 0.1642 | 0.7748 | 0.93 | 0.9154 | 0.2310 | 0.0110 |
| No log | 86.96 | 261 | 0.5574 | 0.93 | 0.1638 | 0.7741 | 0.93 | 0.9154 | 0.2286 | 0.0111 |
| No log | 87.96 | 264 | 0.5559 | 0.93 | 0.1636 | 0.7739 | 0.93 | 0.9154 | 0.2274 | 0.0108 |
| No log | 88.96 | 267 | 0.5555 | 0.93 | 0.1631 | 0.7748 | 0.93 | 0.9154 | 0.2264 | 0.0109 |
| No log | 89.96 | 270 | 0.5539 | 0.93 | 0.1623 | 0.7743 | 0.93 | 0.9154 | 0.2255 | 0.0108 |
| No log | 90.96 | 273 | 0.5525 | 0.93 | 0.1616 | 0.7738 | 0.93 | 0.9154 | 0.2251 | 0.0108 |
| No log | 91.96 | 276 | 0.5508 | 0.93 | 0.1612 | 0.7726 | 0.93 | 0.9154 | 0.2254 | 0.0108 |
| No log | 92.96 | 279 | 0.5498 | 0.93 | 0.1610 | 0.7721 | 0.93 | 0.9154 | 0.2240 | 0.0107 |
| No log | 93.96 | 282 | 0.5497 | 0.93 | 0.1607 | 0.7717 | 0.93 | 0.9154 | 0.2141 | 0.0107 |
| No log | 94.96 | 285 | 0.5487 | 0.93 | 0.1605 | 0.7711 | 0.93 | 0.9154 | 0.2138 | 0.0107 |
| No log | 95.96 | 288 | 0.5473 | 0.93 | 0.1601 | 0.7704 | 0.93 | 0.9154 | 0.2129 | 0.0106 |
| No log | 96.96 | 291 | 0.5467 | 0.93 | 0.1599 | 0.7701 | 0.93 | 0.9154 | 0.2124 | 0.0106 |
| No log | 97.96 | 294 | 0.5462 | 0.93 | 0.1597 | 0.7698 | 0.93 | 0.9154 | 0.2121 | 0.0106 |
| No log | 98.96 | 297 | 0.5458 | 0.93 | 0.1596 | 0.7694 | 0.93 | 0.9154 | 0.2120 | 0.0106 |
| No log | 99.96 | 300 | 0.5457 | 0.93 | 0.1595 | 0.7692 | 0.93 | 0.9154 | 0.2119 | 0.0106 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
felixb85/LunarLander-v2
|
felixb85
| 2023-07-24T12:29:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T12:29:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.63 +/- 14.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo; the filename is an assumed placeholder.
checkpoint = load_from_hub(repo_id="felixb85/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
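To check the reported mean reward, the loaded policy can be evaluated on the environment; a rough sketch using the `model` from above, assuming `gymnasium[box2d]` is installed:
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```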
|
Ryan-sjtu/textual_inversion_cat
|
Ryan-sjtu
| 2023-07-24T12:18:51Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-24T10:43:00Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Ryan-sjtu/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
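A rough usage sketch with 🤗 Diffusers might look like the following; the placeholder token and the prompt are assumptions, since the card does not state them:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo; "<cat-toy>" is an assumed placeholder token.
pipe.load_textual_inversion("Ryan-sjtu/textual_inversion_cat")

image = pipe("A photo of <cat-toy> sitting on a sofa").images[0]
image.save("cat.png")
```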
|
abukashan/llama2-qlora-finetunined
|
abukashan
| 2023-07-24T11:55:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T11:54:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
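For reference, the same quantization settings could be reproduced in code with a `BitsAndBytesConfig` roughly like this (a sketch based on the values above, not taken from the actual training script; the llm_int8_* fields are left at their defaults):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```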
### Framework versions
- PEFT 0.5.0.dev0
|
polejowska/detr-r50-cd45rb-8ah-6l-128d
|
polejowska
| 2023-07-24T11:52:39Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-07-24T03:37:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-8ah-6l-128d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-8ah-6l-128d
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4100
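A minimal inference sketch for this detector; the image path is a placeholder, and the class names come from the cd45rb dataset, which is not documented here:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("polejowska/detr-r50-cd45rb-8ah-6l-128d")
model = AutoModelForObjectDetection.from_pretrained("polejowska/detr-r50-cd45rb-8ah-6l-128d")

image = Image.open("cells.png").convert("RGB")  # placeholder path to a cd45rb-style image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a confidence threshold; target_sizes expects (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=torch.tensor([image.size[::-1]])
)[0]
print(results["scores"], results["labels"], results["boxes"])
```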
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.1894 | 1.0 | 4606 | 3.9730 |
| 3.4275 | 2.0 | 9212 | 3.5463 |
| 3.3617 | 3.0 | 13818 | 3.6702 |
| 3.3349 | 4.0 | 18424 | 3.2565 |
| 3.3174 | 5.0 | 23030 | 3.3137 |
| 3.2912 | 6.0 | 27636 | 3.6149 |
| 3.2733 | 7.0 | 32242 | 3.5332 |
| 3.2614 | 8.0 | 36848 | 3.5219 |
| 3.2479 | 9.0 | 41454 | 3.4650 |
| 3.2358 | 10.0 | 46060 | 3.4100 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Price11/bert_based-NO-SQLI
|
Price11
| 2023-07-24T11:37:24Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T10:42:25Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_based-NO-SQLI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_based-NO-SQLI
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
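A quick way to try the classifier; the example query is illustrative only, and the label mapping is not documented on this card:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Price11/bert_based-NO-SQLI")

# A toy input resembling a SQL injection attempt (illustrative only)
print(clf("SELECT * FROM users WHERE name = '' OR '1'='1';"))
```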
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0052 | 1.0 | 1000 | 0.0001 | 1.0 |
| 0.0 | 2.0 | 2000 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.11.0
|
bagassword21/gonash
|
bagassword21
| 2023-07-24T11:19:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T16:30:32Z |
---
license: creativeml-openrail-m
---
|
jordyvl/dit-base_tobacco_crl_allv2
|
jordyvl
| 2023-07-24T10:40:15Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-24T09:42:04Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-base_tobacco_crl_allv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco_crl_allv2
This model is a fine-tuned version of [jordyvl/dit-base_tobacco](https://huggingface.co/jordyvl/dit-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3885
- Accuracy: 0.945
- Brier Loss: 0.1018
- Nll: 0.7205
- F1 Micro: 0.945
- F1 Macro: 0.9429
- Ece: 0.0554
- Aurc: 0.0107
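As with the related DiT checkpoints, a bare-bones inference sketch might look like this; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("jordyvl/dit-base_tobacco_crl_allv2")
model = AutoModelForImageClassification.from_pretrained("jordyvl/dit-base_tobacco_crl_allv2")

image = Image.open("scan.png").convert("RGB")  # placeholder document image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```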
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 0.3286 | 0.925 | 0.1161 | 1.0901 | 0.925 | 0.9187 | 0.0807 | 0.0113 |
| No log | 1.96 | 6 | 0.3344 | 0.93 | 0.1123 | 1.0906 | 0.93 | 0.9219 | 0.0760 | 0.0120 |
| No log | 2.96 | 9 | 0.3363 | 0.935 | 0.1092 | 1.0909 | 0.935 | 0.9319 | 0.0711 | 0.0135 |
| No log | 3.96 | 12 | 0.3530 | 0.935 | 0.1135 | 1.0883 | 0.935 | 0.9320 | 0.0753 | 0.0158 |
| No log | 4.96 | 15 | 0.3673 | 0.93 | 0.1170 | 1.0778 | 0.93 | 0.9247 | 0.0811 | 0.0162 |
| No log | 5.96 | 18 | 0.3583 | 0.93 | 0.1167 | 1.0706 | 0.93 | 0.9247 | 0.0708 | 0.0159 |
| No log | 6.96 | 21 | 0.3469 | 0.93 | 0.1121 | 1.0675 | 0.93 | 0.9247 | 0.0783 | 0.0148 |
| No log | 7.96 | 24 | 0.3357 | 0.935 | 0.1071 | 1.0654 | 0.935 | 0.9279 | 0.0724 | 0.0136 |
| No log | 8.96 | 27 | 0.3329 | 0.935 | 0.1048 | 1.0488 | 0.935 | 0.9279 | 0.0615 | 0.0127 |
| No log | 9.96 | 30 | 0.3354 | 0.94 | 0.1027 | 1.0178 | 0.94 | 0.9391 | 0.0636 | 0.0130 |
| No log | 10.96 | 33 | 0.3350 | 0.94 | 0.1018 | 1.0054 | 0.94 | 0.9418 | 0.0616 | 0.0136 |
| No log | 11.96 | 36 | 0.3342 | 0.94 | 0.1012 | 1.0160 | 0.94 | 0.9418 | 0.0632 | 0.0133 |
| No log | 12.96 | 39 | 0.3341 | 0.935 | 0.1002 | 1.0132 | 0.935 | 0.9318 | 0.0692 | 0.0135 |
| No log | 13.96 | 42 | 0.3427 | 0.93 | 0.1039 | 1.0032 | 0.93 | 0.9275 | 0.0644 | 0.0137 |
| No log | 14.96 | 45 | 0.3393 | 0.945 | 0.0985 | 0.9986 | 0.945 | 0.9406 | 0.0581 | 0.0122 |
| No log | 15.96 | 48 | 0.3304 | 0.94 | 0.0995 | 0.9934 | 0.94 | 0.9390 | 0.0575 | 0.0124 |
| No log | 16.96 | 51 | 0.3372 | 0.94 | 0.1010 | 0.9796 | 0.94 | 0.9390 | 0.0597 | 0.0127 |
| No log | 17.96 | 54 | 0.3399 | 0.94 | 0.1023 | 0.9591 | 0.94 | 0.9459 | 0.0603 | 0.0123 |
| No log | 18.96 | 57 | 0.3443 | 0.94 | 0.1044 | 0.9473 | 0.94 | 0.9459 | 0.0588 | 0.0122 |
| No log | 19.96 | 60 | 0.3491 | 0.94 | 0.1064 | 0.9401 | 0.94 | 0.9398 | 0.0617 | 0.0122 |
| No log | 20.96 | 63 | 0.3510 | 0.94 | 0.1081 | 0.9288 | 0.94 | 0.9398 | 0.0681 | 0.0131 |
| No log | 21.96 | 66 | 0.3485 | 0.94 | 0.1074 | 0.9111 | 0.94 | 0.9398 | 0.0628 | 0.0132 |
| No log | 22.96 | 69 | 0.3481 | 0.935 | 0.1056 | 0.8993 | 0.935 | 0.9382 | 0.0616 | 0.0132 |
| No log | 23.96 | 72 | 0.3605 | 0.935 | 0.1131 | 0.9013 | 0.935 | 0.9378 | 0.0684 | 0.0120 |
| No log | 24.96 | 75 | 0.3738 | 0.935 | 0.1159 | 0.9113 | 0.935 | 0.9377 | 0.0683 | 0.0117 |
| No log | 25.96 | 78 | 0.3657 | 0.935 | 0.1108 | 0.8932 | 0.935 | 0.9394 | 0.0690 | 0.0124 |
| No log | 26.96 | 81 | 0.3511 | 0.94 | 0.1060 | 0.8761 | 0.94 | 0.9446 | 0.0563 | 0.0120 |
| No log | 27.96 | 84 | 0.3375 | 0.94 | 0.1025 | 0.8662 | 0.94 | 0.9446 | 0.0602 | 0.0108 |
| No log | 28.96 | 87 | 0.3369 | 0.94 | 0.1019 | 0.8654 | 0.94 | 0.9446 | 0.0558 | 0.0090 |
| No log | 29.96 | 90 | 0.3423 | 0.94 | 0.1055 | 0.8602 | 0.94 | 0.9446 | 0.0610 | 0.0076 |
| No log | 30.96 | 93 | 0.3458 | 0.945 | 0.1065 | 0.8525 | 0.945 | 0.9474 | 0.0605 | 0.0078 |
| No log | 31.96 | 96 | 0.3436 | 0.945 | 0.1035 | 0.8390 | 0.945 | 0.9490 | 0.0591 | 0.0082 |
| No log | 32.96 | 99 | 0.3436 | 0.94 | 0.1025 | 0.8294 | 0.94 | 0.9397 | 0.0574 | 0.0086 |
| No log | 33.96 | 102 | 0.3481 | 0.94 | 0.0990 | 0.8225 | 0.94 | 0.9398 | 0.0579 | 0.0102 |
| No log | 34.96 | 105 | 0.3519 | 0.945 | 0.0965 | 0.8203 | 0.945 | 0.9491 | 0.0576 | 0.0109 |
| No log | 35.96 | 108 | 0.3551 | 0.945 | 0.0939 | 0.8213 | 0.945 | 0.9491 | 0.0547 | 0.0116 |
| No log | 36.96 | 111 | 0.3611 | 0.95 | 0.0945 | 0.8193 | 0.9500 | 0.9519 | 0.0556 | 0.0117 |
| No log | 37.96 | 114 | 0.3678 | 0.94 | 0.1037 | 0.8166 | 0.94 | 0.9446 | 0.0591 | 0.0116 |
| No log | 38.96 | 117 | 0.3740 | 0.94 | 0.1086 | 0.8226 | 0.94 | 0.9446 | 0.0588 | 0.0112 |
| No log | 39.96 | 120 | 0.3754 | 0.94 | 0.1106 | 0.8328 | 0.94 | 0.9446 | 0.0631 | 0.0114 |
| No log | 40.96 | 123 | 0.3699 | 0.94 | 0.1097 | 0.8241 | 0.94 | 0.9446 | 0.0584 | 0.0116 |
| No log | 41.96 | 126 | 0.3606 | 0.94 | 0.1051 | 0.8010 | 0.94 | 0.9446 | 0.0550 | 0.0122 |
| No log | 42.96 | 129 | 0.3548 | 0.94 | 0.0970 | 0.7939 | 0.94 | 0.9447 | 0.0603 | 0.0130 |
| No log | 43.96 | 132 | 0.3533 | 0.95 | 0.0948 | 0.7902 | 0.9500 | 0.9522 | 0.0589 | 0.0131 |
| No log | 44.96 | 135 | 0.3588 | 0.945 | 0.0973 | 0.7818 | 0.945 | 0.9478 | 0.0540 | 0.0128 |
| No log | 45.96 | 138 | 0.3634 | 0.945 | 0.1011 | 0.7802 | 0.945 | 0.9478 | 0.0566 | 0.0124 |
| No log | 46.96 | 141 | 0.3642 | 0.945 | 0.1018 | 0.7813 | 0.945 | 0.9478 | 0.0564 | 0.0108 |
| No log | 47.96 | 144 | 0.3624 | 0.945 | 0.1018 | 0.7858 | 0.945 | 0.9478 | 0.0568 | 0.0104 |
| No log | 48.96 | 147 | 0.3653 | 0.945 | 0.1011 | 0.7949 | 0.945 | 0.9478 | 0.0570 | 0.0110 |
| No log | 49.96 | 150 | 0.3697 | 0.945 | 0.1022 | 0.8296 | 0.945 | 0.9478 | 0.0565 | 0.0110 |
| No log | 50.96 | 153 | 0.3705 | 0.945 | 0.1025 | 0.8677 | 0.945 | 0.9478 | 0.0558 | 0.0111 |
| No log | 51.96 | 156 | 0.3753 | 0.945 | 0.1042 | 0.7933 | 0.945 | 0.9478 | 0.0553 | 0.0106 |
| No log | 52.96 | 159 | 0.3763 | 0.945 | 0.1038 | 0.7869 | 0.945 | 0.9478 | 0.0580 | 0.0112 |
| No log | 53.96 | 162 | 0.3735 | 0.95 | 0.1007 | 0.7751 | 0.9500 | 0.9522 | 0.0551 | 0.0114 |
| No log | 54.96 | 165 | 0.3713 | 0.95 | 0.0995 | 0.7660 | 0.9500 | 0.9522 | 0.0551 | 0.0112 |
| No log | 55.96 | 168 | 0.3709 | 0.95 | 0.0985 | 0.7592 | 0.9500 | 0.9522 | 0.0540 | 0.0108 |
| No log | 56.96 | 171 | 0.3747 | 0.95 | 0.0985 | 0.7580 | 0.9500 | 0.9522 | 0.0530 | 0.0119 |
| No log | 57.96 | 174 | 0.3793 | 0.95 | 0.0992 | 0.7567 | 0.9500 | 0.9522 | 0.0532 | 0.0121 |
| No log | 58.96 | 177 | 0.3802 | 0.95 | 0.0992 | 0.7519 | 0.9500 | 0.9522 | 0.0540 | 0.0115 |
| No log | 59.96 | 180 | 0.3815 | 0.95 | 0.1007 | 0.7485 | 0.9500 | 0.9522 | 0.0545 | 0.0108 |
| No log | 60.96 | 183 | 0.3873 | 0.945 | 0.1069 | 0.7489 | 0.945 | 0.9478 | 0.0572 | 0.0098 |
| No log | 61.96 | 186 | 0.3883 | 0.945 | 0.1070 | 0.7477 | 0.945 | 0.9478 | 0.0564 | 0.0097 |
| No log | 62.96 | 189 | 0.3818 | 0.945 | 0.1053 | 0.7451 | 0.945 | 0.9478 | 0.0561 | 0.0096 |
| No log | 63.96 | 192 | 0.3745 | 0.945 | 0.1048 | 0.7446 | 0.945 | 0.9478 | 0.0586 | 0.0101 |
| No log | 64.96 | 195 | 0.3762 | 0.945 | 0.1030 | 0.8090 | 0.945 | 0.9478 | 0.0576 | 0.0103 |
| No log | 65.96 | 198 | 0.3822 | 0.95 | 0.1025 | 0.8092 | 0.9500 | 0.9522 | 0.0564 | 0.0104 |
| No log | 66.96 | 201 | 0.3896 | 0.95 | 0.1030 | 0.8112 | 0.9500 | 0.9522 | 0.0566 | 0.0103 |
| No log | 67.96 | 204 | 0.3914 | 0.945 | 0.1036 | 0.8095 | 0.945 | 0.9490 | 0.0586 | 0.0102 |
| No log | 68.96 | 207 | 0.3900 | 0.945 | 0.1043 | 0.8060 | 0.945 | 0.9490 | 0.0585 | 0.0097 |
| No log | 69.96 | 210 | 0.3903 | 0.945 | 0.1059 | 0.7370 | 0.945 | 0.9490 | 0.0586 | 0.0099 |
| No log | 70.96 | 213 | 0.3923 | 0.94 | 0.1069 | 0.7327 | 0.94 | 0.9446 | 0.0568 | 0.0096 |
| No log | 71.96 | 216 | 0.3894 | 0.94 | 0.1070 | 0.7316 | 0.94 | 0.9446 | 0.0611 | 0.0094 |
| No log | 72.96 | 219 | 0.3847 | 0.94 | 0.1053 | 0.7318 | 0.94 | 0.9446 | 0.0607 | 0.0100 |
| No log | 73.96 | 222 | 0.3833 | 0.94 | 0.1043 | 0.7315 | 0.94 | 0.9446 | 0.0603 | 0.0105 |
| No log | 74.96 | 225 | 0.3822 | 0.935 | 0.1041 | 0.7310 | 0.935 | 0.9353 | 0.0620 | 0.0101 |
| No log | 75.96 | 228 | 0.3771 | 0.945 | 0.1026 | 0.7314 | 0.945 | 0.9429 | 0.0552 | 0.0100 |
| No log | 76.96 | 231 | 0.3748 | 0.945 | 0.1014 | 0.7322 | 0.945 | 0.9429 | 0.0569 | 0.0100 |
| No log | 77.96 | 234 | 0.3759 | 0.945 | 0.1010 | 0.7332 | 0.945 | 0.9429 | 0.0556 | 0.0104 |
| No log | 78.96 | 237 | 0.3775 | 0.945 | 0.1009 | 0.7346 | 0.945 | 0.9429 | 0.0546 | 0.0107 |
| No log | 79.96 | 240 | 0.3784 | 0.94 | 0.1012 | 0.7343 | 0.94 | 0.9398 | 0.0558 | 0.0108 |
| No log | 80.96 | 243 | 0.3797 | 0.94 | 0.1013 | 0.7340 | 0.94 | 0.9398 | 0.0559 | 0.0109 |
| No log | 81.96 | 246 | 0.3821 | 0.94 | 0.1012 | 0.7359 | 0.94 | 0.9398 | 0.0578 | 0.0109 |
| No log | 82.96 | 249 | 0.3836 | 0.94 | 0.1011 | 0.7332 | 0.94 | 0.9398 | 0.0576 | 0.0108 |
| No log | 83.96 | 252 | 0.3844 | 0.94 | 0.1009 | 0.7318 | 0.94 | 0.9398 | 0.0574 | 0.0106 |
| No log | 84.96 | 255 | 0.3859 | 0.94 | 0.1009 | 0.7316 | 0.94 | 0.9398 | 0.0572 | 0.0106 |
| No log | 85.96 | 258 | 0.3885 | 0.94 | 0.1012 | 0.7312 | 0.94 | 0.9398 | 0.0546 | 0.0106 |
| No log | 86.96 | 261 | 0.3898 | 0.945 | 0.1015 | 0.7292 | 0.945 | 0.9429 | 0.0546 | 0.0106 |
| No log | 87.96 | 264 | 0.3905 | 0.945 | 0.1018 | 0.7265 | 0.945 | 0.9429 | 0.0560 | 0.0108 |
| No log | 88.96 | 267 | 0.3909 | 0.945 | 0.1020 | 0.7239 | 0.945 | 0.9429 | 0.0558 | 0.0106 |
| No log | 89.96 | 270 | 0.3903 | 0.945 | 0.1018 | 0.7219 | 0.945 | 0.9429 | 0.0559 | 0.0105 |
| No log | 90.96 | 273 | 0.3895 | 0.945 | 0.1017 | 0.7208 | 0.945 | 0.9429 | 0.0559 | 0.0105 |
| No log | 91.96 | 276 | 0.3891 | 0.945 | 0.1017 | 0.7202 | 0.945 | 0.9429 | 0.0562 | 0.0104 |
| No log | 92.96 | 279 | 0.3890 | 0.945 | 0.1017 | 0.7201 | 0.945 | 0.9429 | 0.0564 | 0.0106 |
| No log | 93.96 | 282 | 0.3889 | 0.945 | 0.1018 | 0.7202 | 0.945 | 0.9429 | 0.0554 | 0.0105 |
| No log | 94.96 | 285 | 0.3883 | 0.945 | 0.1016 | 0.7206 | 0.945 | 0.9429 | 0.0555 | 0.0105 |
| No log | 95.96 | 288 | 0.3880 | 0.945 | 0.1016 | 0.7210 | 0.945 | 0.9429 | 0.0556 | 0.0107 |
| No log | 96.96 | 291 | 0.3880 | 0.945 | 0.1016 | 0.7209 | 0.945 | 0.9429 | 0.0555 | 0.0107 |
| No log | 97.96 | 294 | 0.3882 | 0.945 | 0.1017 | 0.7207 | 0.945 | 0.9429 | 0.0555 | 0.0107 |
| No log | 98.96 | 297 | 0.3884 | 0.945 | 0.1017 | 0.7205 | 0.945 | 0.9429 | 0.0554 | 0.0107 |
| No log | 99.96 | 300 | 0.3885 | 0.945 | 0.1018 | 0.7205 | 0.945 | 0.9429 | 0.0554 | 0.0107 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gFulvio/moralstories-roberta-action.context-cls
|
gFulvio
| 2023-07-24T10:36:17Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:demelin/moral_stories",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T17:55:50Z |
---
datasets:
- demelin/moral_stories
pipeline_tag: text-classification
---
|
ShekDass/donut-base-cord-smart
|
ShekDass
| 2023-07-24T10:29:51Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base-finetuned-cord-v2",
"base_model:finetune:naver-clova-ix/donut-base-finetuned-cord-v2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-24T09:49:19Z |
---
license: mit
base_model: naver-clova-ix/donut-base-finetuned-cord-v2
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-cord-smart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-cord-smart
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset.
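Donut checkpoints are usually driven with a task prompt; a hedged inference sketch follows. The `<s_cord-v2>` prompt is the usual CORD convention and is an assumption for this fine-tune, the image path is a placeholder, and the processor is assumed to have been pushed alongside the model (otherwise load it from the base checkpoint):
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("ShekDass/donut-base-cord-smart")
model = VisionEncoderDecoderModel.from_pretrained("ShekDass/donut-base-cord-smart")

image = Image.open("receipt.png").convert("RGB")  # placeholder receipt image
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s_cord-v2>"  # assumed CORD-style task prompt
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.token2json(processor.batch_decode(outputs)[0]))
```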
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jondurbin/airoboros-l2-70b-gpt4-1.4.1-peft
|
jondurbin
| 2023-07-24T10:25:19Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-07-23T11:58:24Z |
---
license: other
---
Adapter model for https://huggingface.co/jondurbin/airoboros-l2-70b-gpt4-1.4.1
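Since this repo only holds adapter weights, they have to be applied on top of a base model with PEFT. A rough sketch; the Llama-2 70B base id is an assumption (the card only links the merged model), and loading a 70B base requires substantial memory or quantization:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-70b-hf"  # assumed base checkpoint
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = PeftModel.from_pretrained(base, "jondurbin/airoboros-l2-70b-gpt4-1.4.1-peft")
```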
|
jondurbin/airoboros-l2-7b-gpt4-1.4.1-peft
|
jondurbin
| 2023-07-24T10:24:09Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-07-23T10:08:24Z |
---
license: other
---
Adapter model for: https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-1.4.1
|
YeungNLP/firefly-llama-13b
|
YeungNLP
| 2023-07-24T10:23:55Z | 1,428 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T06:16:38Z |
This model is based on llama-13b and was instruction-tuned on the UltraChat dataset, roughly 1.4 million multi-turn dialogues. Training can be completed on a single GPU.
firefly-llama-13b was evaluated objectively on the 🤗 Hugging Face Open LLM Leaderboard.
On the leaderboard, firefly-llama-13b achieves solid results: about 0.2 points higher than vicuna-13b-1.1, about 0.5 points lower than llama-2-13b-chat, and about 0.6 points lower than vicuna-13b-v1.3. Judging by these scores, firefly-llama-13b is very close in capability to vicuna-13b and llama-2-13b-chat 😎.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA (MC) |
|---------------------|---------|------|-----------|------|-----------------|
| Llama-2-70b-chat-hf | 66.8 | 64.6 | 85.9 | 63.9 | 52.8 |
| vicuna-13b-v1.3 | 60 | 54.6 | 80.4 | 52.9 | 52.1 |
| Llama-2-13b-chat-hf | 59.9 | 59 | 81.9 | 54.6 | 44.1 |
| firefly-llama-13b | 59.4 | 59 | 79.7 | 49.1 | 49.6 |
| vicuna-13b-1.1 | 59.2 | 52.7 | 80.1 | 51.9 | 52.1 |
| guanaco-13B-HF | 59.1 | 57.8 | 83.8 | 48.3 | 46.7 |
Notably, vicuna-13b was trained with full-parameter fine-tuning, which places very high demands on training resources, whereas firefly-llama-13b was trained with QLoRA: as little as 16 GB of GPU memory is enough to fine-tune the 13B model.
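As an illustration of the low-memory point above, the released checkpoint itself can be loaded in 4-bit with 🤗 Transformers (a sketch, not the project's official inference script):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YeungNLP/firefly-llama-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)
```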
For a detailed write-up, see the article: [Firefly reproduces Vicuna-13B on a single GPU, scoring ~0.2 higher on the 🤗 Open LLM Leaderboard](https://mp.weixin.qq.com/s/QG2YMo_QxaxS_Rr2yJrIeA)
More details in the [Firefly project](https://github.com/yangjianxin1/Firefly)
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|
PriyanshS/DRL_Lab4
|
PriyanshS
| 2023-07-24T10:23:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-23T12:21:30Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: DRL_Lab4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 80.30 +/- 32.54
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
YarramsettiNaresh/Reinforce-Pixelcopter-PLE-v0
|
YarramsettiNaresh
| 2023-07-24T10:11:59Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-22T09:06:59Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.60 +/- 25.96
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nicotaroni/model_experiment_2
|
nicotaroni
| 2023-07-24T10:09:46Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T09:56:36Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: model_experiment_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_experiment_2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.056 | 1.0 | 1989 | 0.0521 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
volodymyrs/upload_test
|
volodymyrs
| 2023-07-24T10:09:22Z | 0 | 0 |
allennlp
|
[
"allennlp",
"audio",
"automatic-speech-recognition",
"speech",
"fr",
"en",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2023-07-19T13:25:39Z |
---
# Example metadata to be added to a model card.
# Full model card template at https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md
language:
- fr
- en
license: apache-2.0
library_name: allennlp
tags:
- audio
- automatic-speech-recognition
- speech
- allennlp
---
This markdown file contains the spec for the modelcard metadata regarding evaluation parameters. When present, and only then, 'model-index', 'datasets' and 'license' contents will be verified when git pushing changes to your README.md file.
Valid license identifiers can be found in [our docs](https://huggingface.co/docs/hub/repositories-licenses).
For the full model card template, see: [https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md).
|
LarryAIDraw/KISS_V1
|
LarryAIDraw
| 2023-07-24T10:03:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-24T09:57:15Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/38407/straddling-kiss-or-lora
|
LarryAIDraw/amzn-000019
|
LarryAIDraw
| 2023-07-24T10:03:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-24T09:55:32Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/21817/amazon-position-or-sex-act-lora-297
|
PriyanshS/DRL_Lab4_2
|
PriyanshS
| 2023-07-24T09:58:16Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-23T12:25:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: DRL_Lab4_2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 8.80 +/- 8.54
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
prognosis/llama2-test
|
prognosis
| 2023-07-24T09:53:41Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T09:52:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
- PEFT 0.4.0
|