Dataset columns:

| Column | Type | Range |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-27 18:27:08 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-27 18:22:57 |
| card | string | lengths 11 to 1.01M |
Jinheng-2002/lora-pokemon-2-831
|
Jinheng-2002
| 2023-09-01T02:17:13Z | 0 | 0 | null |
[
"tensorboard",
"stable-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"region:us"
] | null | 2023-09-01T02:03:51Z |
---
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
inference: true
---
|
AdanLee/ppo-LunarLander-v2
|
AdanLee
| 2023-09-01T02:11:32Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-14T03:29:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.42 +/- 19.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
from huggingface_sb3 import load_from_hub
repo_id = "AdanLee/ppo-LunarLander-v2" # The repo_id
filename = "ppo-LunarLander-v2.zip" # The model filename.zip
# Models trained with Python 3.8 are pickled with protocol 5,
# but Python 3.6 and 3.7 use protocol 4.
# To keep them compatible we need to:
# 1. Install pickle5 (done at the beginning of the Colab)
# 2. Create a custom empty object that we pass as a parameter to PPO.load()
custom_objects = {
"learning_rate": 0.0,
"lr_schedule": lambda _: 0.0,
"clip_range": lambda _: 0.0,
}
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
...
```
|
tkoyama/marian-finetuned-kde4-en-to-fr
|
tkoyama
| 2023-09-01T01:50:54Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-31T23:37:38Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.92454808849736
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.9245
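A minimal usage sketch with the Transformers `translation` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# Load this checkpoint through the generic translation pipeline.
translator = pipeline("translation", model="tkoyama/marian-finetuned-kde4-en-to-fr")

# Translate an English sentence to French; the input is only an example.
print(translator("Default to expanded threads")[0]["translation_text"])
```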
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
zzzotop/low-resource-data-quality-classification-demo-cat
|
zzzotop
| 2023-09-01T01:43:26Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T21:12:01Z |
Demo exploring, amongst other things, the extent to which low-resource languages have poorer-quality data (in terms of both tagging and more general usefulness) than their high-resource counterparts. Inspired by the estimate that the tagging error rate in the corpus used was 10% higher in the LRL than in the HRL (Zotova et al. 2020). Also demonstrated is cross-lingual transfer, akin to my earlier demos.
BETO (dccuchile/bert-base-spanish-wwm-cased) finetuned for text classification on the Catalan portion of the Catalonia Independence Corpus (CIC) for 5 epochs. All Catalan text entered will be classified as in favour of, against, or neutral towards Catalan independence. The dataset required significant preprocessing, including removal of the validation set and reassignment of its data to the train and test sets. Learning rate 2e-5, batch size 4, weight decay 0.1.
<b>Works best with long inputs, seems to associate topics about change and modernity with 'FAVOR' and those about history with 'AGAINST'. Generally skews 'AGAINST', probably overfitted.</b>
Evaluated every epoch using F1 score with macro averaging:<br>
5 epochs: 0.716673<br>
10 epochs: 0.719966<br>
20 epochs (final): 0.740322
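A minimal usage sketch with the `text-classification` pipeline; the example sentence is illustrative, and the returned label names depend on the model's config:
```python
from transformers import pipeline

# Load the fine-tuned BETO stance classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="zzzotop/low-resource-data-quality-classification-demo-cat",
)

# Classify a Catalan sentence (example input only).
print(classifier("La independència és l'únic camí possible per a la modernització del país."))
```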
|
menoua/a2c-PandaReachDense-v2
|
menoua
| 2023-09-01T01:41:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-19T00:50:38Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.39 +/- 0.39
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
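Until the card is filled in, here is a hedged loading sketch in the spirit of other SB3 cards; the archive filename below follows the usual huggingface_sb3 naming convention and is an assumption, so check the repository's file list for the exact name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (filename is assumed, not confirmed by this card).
checkpoint = load_from_hub("menoua/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")

# Load the A2C policy from the downloaded zip archive.
model = A2C.load(checkpoint)
```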
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
dkqjrm/20230901052720
|
dkqjrm
| 2023-09-01T01:32:27Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T20:27:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230901052720'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230901052720
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1565
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.1594 | 0.5 |
| 0.1901 | 2.0 | 680 | 0.1620 | 0.5 |
| 0.1693 | 3.0 | 1020 | 0.1564 | 0.5 |
| 0.1693 | 4.0 | 1360 | 0.1563 | 0.5 |
| 0.1657 | 5.0 | 1700 | 0.1575 | 0.5 |
| 0.1638 | 6.0 | 2040 | 0.1594 | 0.5 |
| 0.1638 | 7.0 | 2380 | 0.1557 | 0.5 |
| 0.1632 | 8.0 | 2720 | 0.1568 | 0.5 |
| 0.1621 | 9.0 | 3060 | 0.1606 | 0.5 |
| 0.1621 | 10.0 | 3400 | 0.1614 | 0.5 |
| 0.1661 | 11.0 | 3740 | 0.1569 | 0.5 |
| 0.1641 | 12.0 | 4080 | 0.1570 | 0.5 |
| 0.1641 | 13.0 | 4420 | 0.1555 | 0.5 |
| 0.1582 | 14.0 | 4760 | 0.1627 | 0.5 |
| 0.1598 | 15.0 | 5100 | 0.1558 | 0.5 |
| 0.1598 | 16.0 | 5440 | 0.1557 | 0.5 |
| 0.16 | 17.0 | 5780 | 0.1558 | 0.5 |
| 0.1571 | 18.0 | 6120 | 0.1560 | 0.5 |
| 0.1571 | 19.0 | 6460 | 0.1553 | 0.5 |
| 0.1594 | 20.0 | 6800 | 0.1556 | 0.5 |
| 0.1581 | 21.0 | 7140 | 0.1635 | 0.5 |
| 0.1581 | 22.0 | 7480 | 0.1562 | 0.5 |
| 0.1585 | 23.0 | 7820 | 0.1578 | 0.5 |
| 0.1574 | 24.0 | 8160 | 0.1561 | 0.5 |
| 0.1585 | 25.0 | 8500 | 0.1561 | 0.5 |
| 0.1585 | 26.0 | 8840 | 0.1567 | 0.5 |
| 0.1573 | 27.0 | 9180 | 0.1559 | 0.5 |
| 0.1569 | 28.0 | 9520 | 0.1624 | 0.5 |
| 0.1569 | 29.0 | 9860 | 0.1559 | 0.5 |
| 0.1578 | 30.0 | 10200 | 0.1570 | 0.5 |
| 0.1569 | 31.0 | 10540 | 0.1598 | 0.5 |
| 0.1569 | 32.0 | 10880 | 0.1564 | 0.5 |
| 0.1569 | 33.0 | 11220 | 0.1611 | 0.5 |
| 0.1567 | 34.0 | 11560 | 0.1578 | 0.5 |
| 0.1567 | 35.0 | 11900 | 0.1567 | 0.5 |
| 0.1573 | 36.0 | 12240 | 0.1562 | 0.5 |
| 0.1564 | 37.0 | 12580 | 0.1574 | 0.5 |
| 0.1564 | 38.0 | 12920 | 0.1609 | 0.5 |
| 0.1553 | 39.0 | 13260 | 0.1574 | 0.5 |
| 0.156 | 40.0 | 13600 | 0.1578 | 0.5 |
| 0.156 | 41.0 | 13940 | 0.1580 | 0.5 |
| 0.1564 | 42.0 | 14280 | 0.1589 | 0.5 |
| 0.1551 | 43.0 | 14620 | 0.1564 | 0.5 |
| 0.1551 | 44.0 | 14960 | 0.1579 | 0.5 |
| 0.1563 | 45.0 | 15300 | 0.1569 | 0.5 |
| 0.1555 | 46.0 | 15640 | 0.1564 | 0.5 |
| 0.1555 | 47.0 | 15980 | 0.1558 | 0.5 |
| 0.1568 | 48.0 | 16320 | 0.1569 | 0.5 |
| 0.1554 | 49.0 | 16660 | 0.1560 | 0.5 |
| 0.1558 | 50.0 | 17000 | 0.1571 | 0.5 |
| 0.1558 | 51.0 | 17340 | 0.1564 | 0.5 |
| 0.1554 | 52.0 | 17680 | 0.1565 | 0.5 |
| 0.1567 | 53.0 | 18020 | 0.1573 | 0.5 |
| 0.1567 | 54.0 | 18360 | 0.1567 | 0.5 |
| 0.1556 | 55.0 | 18700 | 0.1563 | 0.5 |
| 0.1555 | 56.0 | 19040 | 0.1566 | 0.5 |
| 0.1555 | 57.0 | 19380 | 0.1561 | 0.5 |
| 0.1551 | 58.0 | 19720 | 0.1559 | 0.5 |
| 0.156 | 59.0 | 20060 | 0.1571 | 0.5 |
| 0.156 | 60.0 | 20400 | 0.1561 | 0.5 |
| 0.155 | 61.0 | 20740 | 0.1569 | 0.5 |
| 0.1548 | 62.0 | 21080 | 0.1561 | 0.5 |
| 0.1548 | 63.0 | 21420 | 0.1561 | 0.5 |
| 0.1542 | 64.0 | 21760 | 0.1584 | 0.5 |
| 0.1562 | 65.0 | 22100 | 0.1566 | 0.5 |
| 0.1562 | 66.0 | 22440 | 0.1565 | 0.5 |
| 0.1528 | 67.0 | 22780 | 0.1562 | 0.5 |
| 0.1562 | 68.0 | 23120 | 0.1566 | 0.5 |
| 0.1562 | 69.0 | 23460 | 0.1562 | 0.5 |
| 0.155 | 70.0 | 23800 | 0.1568 | 0.5 |
| 0.1544 | 71.0 | 24140 | 0.1566 | 0.5 |
| 0.1544 | 72.0 | 24480 | 0.1561 | 0.5 |
| 0.1543 | 73.0 | 24820 | 0.1562 | 0.5 |
| 0.1546 | 74.0 | 25160 | 0.1563 | 0.5 |
| 0.1542 | 75.0 | 25500 | 0.1563 | 0.5 |
| 0.1542 | 76.0 | 25840 | 0.1565 | 0.5 |
| 0.1548 | 77.0 | 26180 | 0.1566 | 0.5 |
| 0.1543 | 78.0 | 26520 | 0.1563 | 0.5 |
| 0.1543 | 79.0 | 26860 | 0.1567 | 0.5 |
| 0.1542 | 80.0 | 27200 | 0.1565 | 0.5 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mke10/distilbert-base-uncased-finetuned-cola
|
mke10
| 2023-09-01T01:08:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-29T15:20:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5357575991513603
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8210
- Matthews Correlation: 0.5358
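A minimal scoring sketch without the pipeline wrapper; the label mapping comes from the model's config, and the example sentence is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "mke10/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Score a single sentence for linguistic acceptability (CoLA-style).
inputs = tokenizer("The book was read by the whole class.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```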
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.528 | 1.0 | 535 | 0.4763 | 0.4464 |
| 0.3571 | 2.0 | 1070 | 0.5180 | 0.4975 |
| 0.2304 | 3.0 | 1605 | 0.6082 | 0.5137 |
| 0.1765 | 4.0 | 2140 | 0.7750 | 0.5255 |
| 0.1316 | 5.0 | 2675 | 0.8210 | 0.5358 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Plumbear/distilhubert-finetuned-gtzan
|
Plumbear
| 2023-09-01T01:04:56Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-30T19:49:21Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5698
- Accuracy: 0.86
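A minimal usage sketch with the `audio-classification` pipeline; `song.wav` is a placeholder path for a local audio clip:
```python
from transformers import pipeline

# Load the fine-tuned DistilHuBERT genre classifier.
classifier = pipeline("audio-classification", model="Plumbear/distilhubert-finetuned-gtzan")

# Classify a local audio file (placeholder path; decoding requires ffmpeg).
print(classifier("song.wav"))
```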
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5773 | 1.0 | 75 | 0.7146 | 0.84 |
| 0.4322 | 2.0 | 150 | 0.6362 | 0.82 |
| 0.445 | 3.0 | 225 | 0.5768 | 0.88 |
| 0.2764 | 4.0 | 300 | 0.5698 | 0.86 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jaober/ppo-LunarLander-v2
|
jaober
| 2023-09-01T00:47:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-01T00:46:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.42 +/- 20.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Dawnstarhunter/DialoGPT-medium-Eveline
|
Dawnstarhunter
| 2023-09-01T00:12:55Z | 151 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T23:43:42Z |
---
tags:
- conversational
---
|
kearney/message-genre
|
kearney
| 2023-09-01T00:01:03Z | 106 | 2 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T20:41:51Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: message-genre
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# message-genre
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5875
- Accuracy: 0.4339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.06 | 100 | 1.8239 | 0.3638 |
| No log | 0.13 | 200 | 1.7266 | 0.3971 |
| No log | 0.19 | 300 | 1.6873 | 0.4040 |
| No log | 0.25 | 400 | 1.6609 | 0.4188 |
| 1.8118 | 0.32 | 500 | 1.6674 | 0.4048 |
| 1.8118 | 0.38 | 600 | 1.6381 | 0.4172 |
| 1.8118 | 0.45 | 700 | 1.6437 | 0.4156 |
| 1.8118 | 0.51 | 800 | 1.6378 | 0.4143 |
| 1.8118 | 0.57 | 900 | 1.6301 | 0.4214 |
| 1.6738 | 0.64 | 1000 | 1.6106 | 0.4320 |
| 1.6738 | 0.7 | 1100 | 1.6089 | 0.4259 |
| 1.6738 | 0.76 | 1200 | 1.5988 | 0.4299 |
| 1.6738 | 0.83 | 1300 | 1.5951 | 0.4347 |
| 1.6738 | 0.89 | 1400 | 1.5896 | 0.4320 |
| 1.6488 | 0.96 | 1500 | 1.5875 | 0.4339 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
fapont/lora-trained-xl-colab-gerard-2
|
fapont
| 2023-08-31T23:49:38Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-31T23:23:34Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of gerard atero
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - fapont/lora-trained-xl-colab-gerard-2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of gerard atero using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
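A minimal inference sketch, assuming a recent diffusers release with `load_lora_weights` support:
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach the LoRA weights from this repository.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("fapont/lora-trained-xl-colab-gerard-2")

# Generate an image with the instance prompt used during training.
image = pipe("a photo of gerard atero").images[0]
image.save("gerard.png")
```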
|
jcrkn/wav2vec2-large-xls-r-300m-bretonwelsh-colab
|
jcrkn
| 2023-08-31T23:43:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-30T19:37:36Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-bretonwelsh-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: cy
split: test
args: cy
metrics:
- name: Wer
type: wer
value: 0.29761332022507164
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bretonwelsh-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4506
- Wer: 0.2976
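A minimal usage sketch with the `automatic-speech-recognition` pipeline; `clip.wav` is a placeholder path to a local recording:
```python
from transformers import pipeline

# Load the fine-tuned XLS-R acoustic model for transcription.
asr = pipeline(
    "automatic-speech-recognition",
    model="jcrkn/wav2vec2-large-xls-r-300m-bretonwelsh-colab",
)

# Transcribe a local audio file (placeholder path; decoding requires ffmpeg).
print(asr("clip.wav")["text"])
```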
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9503 | 0.98 | 800 | 0.8330 | 0.7296 |
| 0.6531 | 1.95 | 1600 | 0.5592 | 0.5470 |
| 0.4637 | 2.93 | 2400 | 0.4711 | 0.4539 |
| 0.3449 | 3.91 | 3200 | 0.4484 | 0.4116 |
| 0.2694 | 4.88 | 4000 | 0.4313 | 0.3860 |
| 0.2087 | 5.86 | 4800 | 0.4115 | 0.3616 |
| 0.1649 | 6.84 | 5600 | 0.4105 | 0.3378 |
| 0.1313 | 7.81 | 6400 | 0.4409 | 0.3236 |
| 0.1079 | 8.79 | 7200 | 0.4402 | 0.3093 |
| 0.0897 | 9.77 | 8000 | 0.4506 | 0.2976 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yaystevek/a2c-PandaPickAndPlace-v3
|
yaystevek
| 2023-08-31T23:34:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T23:28:22Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
TomyAI/test
|
TomyAI
| 2023-08-31T23:19:42Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T09:52:31Z |
---
license: creativeml-openrail-m
---
|
davera-017/a2c-PandaReachDense-v3
|
davera-017
| 2023-08-31T23:09:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T23:04:17Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Snearec/detectorMalezasYolo8
|
Snearec
| 2023-08-31T23:05:04Z | 3 | 0 |
ultralytics
|
[
"ultralytics",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"model-index",
"region:us"
] |
object-detection
| 2023-06-15T14:05:53Z |
---
tags:
- ultralyticsplus
- ultralytics
- yolov8
- yolo
- vision
- object-detection
- pytorch
library_name: ultralytics
library_version: 8.0.4
inference: false
model-index:
- name: ultralyticsplus/yolov8s
results:
- task:
type: object-detection
metrics:
- type: precision
value: 0.449
name: mAP
pipeline_tag: object-detection
---
|
MAG1965/pilar-rubio-beautiful-face-model
|
MAG1965
| 2023-08-31T22:51:59Z | 21 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-31T22:47:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Pilar-Rubio-beautiful-face-model Dreambooth model trained by MAG1965 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
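A minimal diffusers inference sketch, assuming the repository loads as a standard `StableDiffusionPipeline`; the prompt is a placeholder for the concept token this DreamBooth run was trained on:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint directly from the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "MAG1965/pilar-rubio-beautiful-face-model", torch_dtype=torch.float16
).to("cuda")

# Placeholder prompt: substitute the concept token used during training.
image = pipe("a portrait photo of the trained concept").images[0]
image.save("sample.png")
```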
Sample pictures of this concept:
|
Susara/JapanesePolite
|
Susara
| 2023-08-31T22:49:03Z | 0 | 1 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-31T22:49:03Z |
---
license: bigscience-openrail-m
---
|
fapont/lora-trained-xl-colab-gerard
|
fapont
| 2023-08-31T22:45:57Z | 5 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-31T21:47:13Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of gerard atero
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - fapont/lora-trained-xl-colab-gerard
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of gerard atero using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
yaystevek/a2c-PandaReachDense-v3
|
yaystevek
| 2023-08-31T22:31:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T22:25:36Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.26 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
AbdelrahmanFakhry/finetuned-gpt2-multi-QA-Generation
|
AbdelrahmanFakhry
| 2023-08-31T22:11:08Z | 163 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-29T15:28:03Z |
---
language:
- en
---
# Model Card: Context-to-QA-Generation using GPT-2
# Description
This model generates questions, answers, hints, and multiple-choice options from a given input context. It is a fine-tuned GPT-2 model trained to follow a fixed output structure of question, answer, hint, and choices.
# Intended Use
This model is intended to be used for generating questions, answers, hints, and multiple-choice options based on a given context. It can be used for educational purposes, exam preparation, content creation, and other applications where automatic question generation is needed.
# Limitations
- The quality of generated questions, answers, and hints depends on the quality and complexity of the input context. Simpler contexts are more likely to yield accurate and coherent outputs.
- The model may sometimes generate incorrect or nonsensical content, especially when the input context is complex or ambiguous.
- The model's output may be influenced by biases present in the training data, potentially leading to biased or inappropriate content generation.
```python
#!pip install transformers
from transformers import AutoTokenizer, GPT2LMHeadModel
checkpoint = "AbdelrahmanFakhry/finetuned-gpt2-multi-QA-Generation"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPT2LMHeadModel.from_pretrained(checkpoint)
# Retrieve a test question from the test dataset
#test_text = test_dataset.to_dict()['question'][3]
# test_text should look like this
test_text = '''Below is input text, the task is to generate questions from input text and multiple answers for
each question and provide hint and correct answer for each question.\n\n### Input:\n<hl> Local intercellular
communication is the province of the paracrine , also called a paracrine factor , which is a chemical that
induces a response in neighboring cells . <hl> Although paracrines may enter the bloodstream , their concentration
is generally too low to elicit a response from distant tissues . A familiar example to those with asthma is histamine ,
a paracrine that is released by immune cells in the bronchial tree . Histamine causes the smooth muscle cells of the bronchi
to constrict , narrowing the airways . Another example is the neurotransmitters of the nervous system , which act only locally
within the synaptic cleft .\n\n### Response: '''
def inference(text, model, tokenizer, max_input_tokens=3000, max_output_tokens=500):
"""
Generate text continuation based on the given input text using a pretrained model.
Args:
text (str): The input text for which to generate a continuation.
model (PreTrainedModel): The pretrained model to use for text generation.
tokenizer (PreTrainedTokenizer): The tokenizer used to preprocess the input and decode the output.
max_input_tokens (int): Maximum number of tokens allowed for the input text.
max_output_tokens (int): Maximum number of tokens in the generated output.
Returns:
generated_text_answer (str): The generated text continuation.
"""
# Tokenize the input text
input_ids = tokenizer.encode(
text,
return_tensors="pt",
truncation=True,
max_length=max_input_tokens
)
# Generate text continuation
device = model.device
generated_tokens_with_prompt = model.generate(
input_ids=input_ids.to(device),
max_length=max_output_tokens
)
# Decode the generated tokens into text
generated_text_with_prompt = tokenizer.batch_decode(generated_tokens_with_prompt, skip_special_tokens=True)
# Extract the generated text continuation without the input prompt
generated_text_answer = generated_text_with_prompt[0][len(text):]
generated_text_answer = generated_text_answer.lstrip(" '][{").rstrip(" '][{}")
return generated_text_answer
generated_answer = inference(test_text, model, tokenizer)
# The generated answer should look something like this:
'''
"Choices': ['paracrine factor', 'paracrine factor', 'paracrine factor II', 'paracrine factor III'],
'Question': 'Which of the following is not a paracrine factor?',
'answer': 'paracrine factor II',
'hint': 'Local intercellular communication is the province of the paracrine, also called a paracrine factor,
which is a chemical that induces a response in neighboring cells."
'''
print('Generated Answer:')
print(generated_answer)
```
# Acknowledgments
This model is built upon the GPT-2 architecture and fine-tuned using a custom dataset for the specific task of generating questions, answers, hints, and choices.
# Disclaimer
This model's performance may vary depending on the input context and task requirements. It is recommended to review and edit the generated content before using it in critical applications. The model's limitations and biases should also be considered when interpreting its outputs.
|
shivankarzz/me
|
shivankarzz
| 2023-08-31T22:06:16Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-08-31T20:16:53Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a shivankar
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
guidoivetta/bert-base-spanish-wwm-cased-finetuned-wine-reviews_spanish
|
guidoivetta
| 2023-08-31T21:46:54Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-31T21:34:22Z |
---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-wine-reviews_spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-wine-reviews_spanish
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6168
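A minimal usage sketch with the `fill-mask` pipeline; the masked sentence is illustrative:
```python
from transformers import pipeline

# Load the fine-tuned Spanish BERT masked-language model.
fill_mask = pipeline(
    "fill-mask",
    model="guidoivetta/bert-base-spanish-wwm-cased-finetuned-wine-reviews_spanish",
)

# Predict the masked token in a wine-review-style sentence (example input only).
print(fill_mask("Este vino tiene aromas de [MASK]."))
```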
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3631 | 1.0 | 248 | 1.8869 |
| 1.9213 | 2.0 | 496 | 1.7766 |
| 1.8088 | 3.0 | 744 | 1.6643 |
| 1.7509 | 4.0 | 992 | 1.6665 |
| 1.7232 | 5.0 | 1240 | 1.6300 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
franziskaM/b24-wav2vec2-large-xls-r-romansh-colab
|
franziskaM
| 2023-08-31T21:45:00Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-31T11:01:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: b24-wav2vec2-large-xls-r-romansh-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: rm-vallader
split: test
args: rm-vallader
metrics:
- name: Wer
type: wer
value: 0.2624592454587797
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b24-wav2vec2-large-xls-r-romansh-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3401
- Wer: 0.2625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.4471 | 0.76 | 100 | 3.3151 | 1.0 |
| 3.0392 | 1.52 | 200 | 3.0118 | 1.0 |
| 2.9633 | 2.29 | 300 | 3.0023 | 1.0 |
| 2.9643 | 3.05 | 400 | 2.9365 | 1.0 |
| 2.9381 | 3.81 | 500 | 2.9319 | 1.0 |
| 2.9411 | 4.58 | 600 | 2.9264 | 1.0 |
| 2.9407 | 5.34 | 700 | 2.9141 | 1.0 |
| 2.9027 | 6.11 | 800 | 2.8848 | 1.0 |
| 2.8833 | 6.87 | 900 | 2.8796 | 0.9988 |
| 2.8805 | 7.63 | 1000 | 2.8679 | 0.9956 |
| 2.7051 | 8.4 | 1100 | 1.8944 | 1.0 |
| 1.343 | 9.16 | 1200 | 0.7785 | 0.6970 |
| 0.8156 | 9.92 | 1300 | 0.5659 | 0.5824 |
| 0.591 | 10.68 | 1400 | 0.4982 | 0.5163 |
| 0.488 | 11.45 | 1500 | 0.4421 | 0.4299 |
| 0.4056 | 12.21 | 1600 | 0.3927 | 0.3959 |
| 0.3488 | 12.97 | 1700 | 0.4095 | 0.3910 |
| 0.2977 | 13.74 | 1800 | 0.3833 | 0.3687 |
| 0.273 | 14.5 | 1900 | 0.3690 | 0.3388 |
| 0.2601 | 15.27 | 2000 | 0.3505 | 0.3121 |
| 0.2258 | 16.03 | 2100 | 0.3577 | 0.3121 |
| 0.2122 | 16.79 | 2200 | 0.3467 | 0.3018 |
| 0.2095 | 17.56 | 2300 | 0.3361 | 0.2951 |
| 0.1719 | 18.32 | 2400 | 0.3572 | 0.2948 |
| 0.1722 | 19.08 | 2500 | 0.3380 | 0.2857 |
| 0.1634 | 19.84 | 2600 | 0.3516 | 0.2883 |
| 0.1592 | 20.61 | 2700 | 0.3374 | 0.2846 |
| 0.153 | 21.37 | 2800 | 0.3395 | 0.2783 |
| 0.1479 | 22.14 | 2900 | 0.3336 | 0.2729 |
| 0.1443 | 22.9 | 3000 | 0.3234 | 0.2669 |
| 0.1339 | 23.66 | 3100 | 0.3345 | 0.2664 |
| 0.1149 | 24.43 | 3200 | 0.3369 | 0.2664 |
| 0.1205 | 25.19 | 3300 | 0.3470 | 0.2660 |
| 0.1251 | 25.95 | 3400 | 0.3319 | 0.2629 |
| 0.1201 | 26.71 | 3500 | 0.3381 | 0.2667 |
| 0.1107 | 27.48 | 3600 | 0.3538 | 0.2655 |
| 0.1117 | 28.24 | 3700 | 0.3423 | 0.2625 |
| 0.1104 | 29.01 | 3800 | 0.3398 | 0.2608 |
| 0.104 | 29.77 | 3900 | 0.3401 | 0.2625 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AndrzejDD/lora-trained-xl
|
AndrzejDD
| 2023-08-31T21:39:17Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-07-27T09:32:40Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - AndrzejDD/lora-trained-xl
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
EmirhanExecute/CartPole-try2
|
EmirhanExecute
| 2023-08-31T21:26:41Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T21:26:33Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-try2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
EmirhanExecute/Pixelcopter-PLE-try2
|
EmirhanExecute
| 2023-08-31T21:25:48Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T21:25:39Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-try2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vaiibhavgupta/finetuned-bleurt-large
|
vaiibhavgupta
| 2023-08-31T21:23:37Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Automated Answer Scoring",
"en",
"dataset:MultiRC",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T15:25:12Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
- f1
library_name: transformers
pipeline_tag: text-classification
datasets:
- MultiRC
tags:
- Automated Answer Scoring
---
|
databio/r2v-ChIP-atlas-hg38
|
databio
| 2023-08-31T21:16:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-01T01:05:08Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Region2Vec ChIP-atlas hg38
## Model Details
### Model Description
This is a region2vec model trained on the hg38 ChIP-atlas ATAC-seq data
- **Developed by:** Nathan LeRoy
- **Model type:** Region2Vec
- **Language(s) (NLP):** hg38
### Model Sources
- **Repository:** https://github.com/databio/geniml
- **Paper:** https://academic.oup.com/bioinformatics/article/37/23/4299/6307720
## Uses
This model can be used to generate embeddings of genomic regions or region sets. Once embeddings are obtained, they can be used directly for clustering, classification, or search-and-retrieval tasks. It is limited to hg38, and it is not recommended for data other than ATAC-seq.
## How to Get Started with the Model
You can download and start encoding new genomic region data using the following code:
```python
from gitk.region2vec import Region2VecExModel
model = Region2VecExModel("databio/r2v-ChIP-atlas")
embeddings = model.encode("path/to/file.bed")
print(embeddings.shape)
```
## Training Details
### Training Data
TODO
|
xoumyax/yaragen1-xoumyax
|
xoumyax
| 2023-08-31T21:11:38Z | 151 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-08T00:17:56Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: yaragen1-xoumyax
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yaragen1-xoumyax
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yethmasoo/distilroberta-base-finetuned-wikitext2
|
yethmasoo
| 2023-08-31T20:44:17Z | 194 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-31T20:35:29Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0845 | 1.0 | 2406 | 1.9272 |
| 1.9948 | 2.0 | 4812 | 1.8685 |
| 1.9425 | 3.0 | 7218 | 1.8560 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.2
|
yethmasoo/distilgpt2-finetuned-wikitext2
|
yethmasoo
| 2023-08-31T20:32:28Z | 259 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T19:55:15Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6412
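A minimal usage sketch with the `text-generation` pipeline; the prompt and sampling settings are illustrative:
```python
from transformers import pipeline

# Load the fine-tuned DistilGPT-2 checkpoint.
generator = pipeline("text-generation", model="yethmasoo/distilgpt2-finetuned-wikitext2")

# Sample a short continuation of an example prompt.
output = generator("The history of the region began", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```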
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7558 | 1.0 | 2334 | 3.6652 |
| 3.6404 | 2.0 | 4668 | 3.6465 |
| 3.5918 | 3.0 | 7002 | 3.6412 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.2
|
dkqjrm/20230901000318
|
dkqjrm
| 2023-08-31T20:27:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T15:03:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230901000318'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230901000318
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1391
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.1391 | 0.5 |
| 0.1491 | 2.0 | 680 | 0.1390 | 0.5 |
| 0.1436 | 3.0 | 1020 | 0.1392 | 0.5 |
| 0.1436 | 4.0 | 1360 | 0.1396 | 0.5 |
| 0.1421 | 5.0 | 1700 | 0.1444 | 0.5 |
| 0.1411 | 6.0 | 2040 | 0.1388 | 0.5 |
| 0.1411 | 7.0 | 2380 | 0.1390 | 0.5 |
| 0.142 | 8.0 | 2720 | 0.1388 | 0.5 |
| 0.1402 | 9.0 | 3060 | 0.1392 | 0.5 |
| 0.1402 | 10.0 | 3400 | 0.1396 | 0.5 |
| 0.1414 | 11.0 | 3740 | 0.1389 | 0.5 |
| 0.141 | 12.0 | 4080 | 0.1390 | 0.5 |
| 0.141 | 13.0 | 4420 | 0.1396 | 0.5 |
| 0.1407 | 14.0 | 4760 | 0.1421 | 0.5 |
| 0.1425 | 15.0 | 5100 | 0.1411 | 0.5 |
| 0.1425 | 16.0 | 5440 | 0.1397 | 0.5 |
| 0.1417 | 17.0 | 5780 | 0.1388 | 0.5 |
| 0.1393 | 18.0 | 6120 | 0.1397 | 0.5 |
| 0.1393 | 19.0 | 6460 | 0.1409 | 0.5 |
| 0.1406 | 20.0 | 6800 | 0.1389 | 0.5 |
| 0.1404 | 21.0 | 7140 | 0.1391 | 0.5 |
| 0.1404 | 22.0 | 7480 | 0.1404 | 0.5 |
| 0.1406 | 23.0 | 7820 | 0.1398 | 0.5 |
| 0.1399 | 24.0 | 8160 | 0.1389 | 0.5 |
| 0.1411 | 25.0 | 8500 | 0.1388 | 0.5 |
| 0.1411 | 26.0 | 8840 | 0.1398 | 0.5 |
| 0.1405 | 27.0 | 9180 | 0.1388 | 0.5 |
| 0.1399 | 28.0 | 9520 | 0.1398 | 0.5 |
| 0.1399 | 29.0 | 9860 | 0.1421 | 0.5 |
| 0.1406 | 30.0 | 10200 | 0.1407 | 0.5 |
| 0.14 | 31.0 | 10540 | 0.1388 | 0.5 |
| 0.14 | 32.0 | 10880 | 0.1408 | 0.5 |
| 0.1402 | 33.0 | 11220 | 0.1402 | 0.5 |
| 0.1418 | 34.0 | 11560 | 0.1386 | 0.5 |
| 0.1418 | 35.0 | 11900 | 0.1385 | 0.5 |
| 0.139 | 36.0 | 12240 | 0.1374 | 0.5 |
| 0.1371 | 37.0 | 12580 | 0.1408 | 0.5 |
| 0.1371 | 38.0 | 12920 | 0.1427 | 0.5 |
| 0.1353 | 39.0 | 13260 | 0.1379 | 0.5 |
| 0.1346 | 40.0 | 13600 | 0.1398 | 0.5 |
| 0.1346 | 41.0 | 13940 | 0.1412 | 0.5 |
| 0.1343 | 42.0 | 14280 | 0.1373 | 0.5 |
| 0.1329 | 43.0 | 14620 | 0.1386 | 0.5 |
| 0.1329 | 44.0 | 14960 | 0.1374 | 0.5 |
| 0.1335 | 45.0 | 15300 | 0.1387 | 0.5 |
| 0.1319 | 46.0 | 15640 | 0.1366 | 0.5 |
| 0.1319 | 47.0 | 15980 | 0.1371 | 0.5 |
| 0.1326 | 48.0 | 16320 | 0.1395 | 0.5 |
| 0.1313 | 49.0 | 16660 | 0.1379 | 0.5 |
| 0.131 | 50.0 | 17000 | 0.1401 | 0.5 |
| 0.131 | 51.0 | 17340 | 0.1417 | 0.5 |
| 0.1302 | 52.0 | 17680 | 0.1390 | 0.5 |
| 0.1313 | 53.0 | 18020 | 0.1367 | 0.5 |
| 0.1313 | 54.0 | 18360 | 0.1392 | 0.5 |
| 0.13 | 55.0 | 18700 | 0.1381 | 0.5 |
| 0.1299 | 56.0 | 19040 | 0.1397 | 0.5 |
| 0.1299 | 57.0 | 19380 | 0.1381 | 0.5 |
| 0.1293 | 58.0 | 19720 | 0.1376 | 0.5 |
| 0.13 | 59.0 | 20060 | 0.1376 | 0.5 |
| 0.13 | 60.0 | 20400 | 0.1395 | 0.5 |
| 0.1291 | 61.0 | 20740 | 0.1385 | 0.5 |
| 0.129 | 62.0 | 21080 | 0.1385 | 0.5 |
| 0.129 | 63.0 | 21420 | 0.1377 | 0.5 |
| 0.1282 | 64.0 | 21760 | 0.1390 | 0.5 |
| 0.1297 | 65.0 | 22100 | 0.1389 | 0.5 |
| 0.1297 | 66.0 | 22440 | 0.1369 | 0.5 |
| 0.1267 | 67.0 | 22780 | 0.1395 | 0.5 |
| 0.129 | 68.0 | 23120 | 0.1403 | 0.5 |
| 0.129 | 69.0 | 23460 | 0.1390 | 0.5 |
| 0.1282 | 70.0 | 23800 | 0.1393 | 0.5 |
| 0.1277 | 71.0 | 24140 | 0.1396 | 0.5 |
| 0.1277 | 72.0 | 24480 | 0.1391 | 0.5 |
| 0.1273 | 73.0 | 24820 | 0.1389 | 0.5 |
| 0.1279 | 74.0 | 25160 | 0.1398 | 0.5 |
| 0.1272 | 75.0 | 25500 | 0.1393 | 0.5 |
| 0.1272 | 76.0 | 25840 | 0.1392 | 0.5 |
| 0.1277 | 77.0 | 26180 | 0.1397 | 0.5 |
| 0.1271 | 78.0 | 26520 | 0.1386 | 0.5 |
| 0.1271 | 79.0 | 26860 | 0.1394 | 0.5 |
| 0.127 | 80.0 | 27200 | 0.1391 | 0.5 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mindadeepam/setfit-hing-mbert-mixed-st-multiclass
|
mindadeepam
| 2023-08-31T20:09:43Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-31T20:09:14Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# mindadeepam/setfit-hing-mbert-mixed-st-multiclass
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("mindadeepam/setfit-hing-mbert-mixed-st-multiclass")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
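For reference, the two-step training procedure described above can be sketched with the `SetFitTrainer` API roughly as follows. The backbone checkpoint and the toy dataset below are placeholders for illustration only, not the data or base model actually used for this repository:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Toy labelled examples; replace with your own few-shot data.
train_dataset = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

# Placeholder Sentence Transformer backbone (the backbone of this model differs).
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # contrastive objective for step 1
    num_iterations=20,                # number of contrastive pairs generated per example
)
# train() runs both steps: contrastive fine-tuning of the body, then fitting the classification head.
trainer.train()
```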
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
tkoyama/distilbert-base-uncased-finetuned-imdb
|
tkoyama
| 2023-08-31T20:07:54Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-31T20:04:26Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4119
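Since this is a masked-language model, a minimal inference sketch with the `fill-mask` pipeline looks like this (the example sentence is arbitrary):
```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="tkoyama/distilbert-base-uncased-finetuned-imdb")

# DistilBERT uses the [MASK] token; the pipeline returns the top candidate fills with scores.
for pred in mask_filler("This movie is a great [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```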
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7024 | 1.0 | 157 | 2.4968 |
| 2.5794 | 2.0 | 314 | 2.4281 |
| 2.5354 | 3.0 | 471 | 2.4509 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yaystevek/ppo-PyramidsTraining
|
yaystevek
| 2023-08-31T19:55:45Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-31T19:55:36Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
 browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
 https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: yaystevek/ppo-PyramidsTraining
3. Select your `.nn` or `.onnx` file
4. Click on Watch the agent play 👀
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_backtranslation
|
ThuyNT03
| 2023-08-31T19:54:15Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T19:49:29Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_backtranslation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_backtranslation
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2392
- Accuracy: 0.71
- F1: 0.7021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
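For reference, these settings correspond roughly to the `Trainer` setup sketched below. The label count and the tiny placeholder dataset are assumptions for illustration; the actual training data is not part of this card:
```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)  # label count assumed

# Tiny placeholder dataset, tokenized the usual way.
raw = Dataset.from_dict({"text": ["ví dụ tích cực", "ví dụ tiêu cực"], "label": [0, 1]})
encoded = raw.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=64))

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="xlm-roberta-base-Final_Mixed-aug_backtranslation",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(model=model, args=args, train_dataset=encoded, eval_dataset=encoded, tokenizer=tokenizer)
trainer.train()
```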
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0111 | 1.0 | 87 | 0.8146 | 0.64 | 0.5888 |
| 0.7211 | 2.0 | 174 | 0.7209 | 0.74 | 0.7347 |
| 0.5231 | 3.0 | 261 | 0.8348 | 0.7 | 0.6778 |
| 0.3879 | 4.0 | 348 | 0.7918 | 0.75 | 0.7462 |
| 0.3063 | 5.0 | 435 | 0.9875 | 0.7 | 0.6906 |
| 0.2411 | 6.0 | 522 | 1.1185 | 0.72 | 0.7144 |
| 0.2316 | 7.0 | 609 | 1.1889 | 0.69 | 0.6845 |
| 0.1868 | 8.0 | 696 | 1.2392 | 0.71 | 0.7021 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
davera-017/ppo-rnd-Pyramids
|
davera-017
| 2023-08-31T19:46:18Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-31T19:46:11Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
 browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
 https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: davera-017/ppo-rnd-Pyramids
3. Select your `.nn` or `.onnx` file
4. Click on Watch the agent play 👀
|
tgoktug/my_bert_classification_model
|
tgoktug
| 2023-08-31T19:32:12Z | 53 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T19:09:00Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/my_bert_classification_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/my_bert_classification_model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5157
- Validation Loss: 1.1278
- Train Accuracy: 0.5596
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0725 | 0.8868 | 0.6163 | 0 |
| 0.8197 | 0.8660 | 0.6323 | 1 |
| 0.7354 | 0.8494 | 0.6046 | 2 |
| 0.6645 | 0.9047 | 0.6080 | 3 |
| 0.6108 | 0.9597 | 0.5914 | 4 |
| 0.5581 | 1.0378 | 0.5713 | 5 |
| 0.5157 | 1.1278 | 0.5596 | 6 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_backtranslation
|
ThuyNT03
| 2023-08-31T19:22:33Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T19:16:18Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_backtranslation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_backtranslation
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1690
- Accuracy: 0.69
- F1: 0.6841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8783 | 1.0 | 87 | 0.7582 | 0.71 | 0.7013 |
| 0.5891 | 2.0 | 174 | 0.7106 | 0.7 | 0.6957 |
| 0.4547 | 3.0 | 261 | 0.8682 | 0.68 | 0.6639 |
| 0.3314 | 4.0 | 348 | 0.9565 | 0.69 | 0.6772 |
| 0.2432 | 5.0 | 435 | 1.0495 | 0.69 | 0.6841 |
| 0.1795 | 6.0 | 522 | 1.1215 | 0.67 | 0.6619 |
| 0.1465 | 7.0 | 609 | 1.1350 | 0.67 | 0.6669 |
| 0.1116 | 8.0 | 696 | 1.1690 | 0.69 | 0.6841 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
SlyEcho/open_llama_3b_gguf
|
SlyEcho
| 2023-08-31T19:18:53Z | 26 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T16:39:53Z |
---
license: apache-2.0
---
# GGUF versions of OpenLLaMA 3B
- Version: 1T tokens final version
- Project: [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama)
- Model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
- [llama.cpp](https://github.com/ggerganov/llama.cpp): build 1012 (6381d4e) or later
- [ggml version](https://huggingface.co/SlyEcho/open_llama_3b_ggml)
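As an illustration, these files can also be loaded from Python through the `llama-cpp-python` bindings; the quantized filename below is a placeholder — point it at one of the `.gguf` files downloaded from this repository:
```python
from llama_cpp import Llama

# Placeholder filename; use one of the downloaded .gguf files.
llm = Llama(model_path="open-llama-3b-q4_0.gguf", n_ctx=2048)

out = llm("The capital of France is", max_tokens=32)
print(out["choices"][0]["text"])
```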
## Newer quantizations
There are now more quantization types in llama.cpp, some lower than 4 bits.
These are currently not supported for this model, possibly because some weight tensors have shapes that are not divisible by 256.
## Perplexity on wiki.test.406
Coming soon...
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_BERT
|
ThuyNT03
| 2023-08-31T19:16:14Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T19:09:54Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_BERT
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8746
- Accuracy: 0.74
- F1: 0.7344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.972 | 1.0 | 88 | 0.7868 | 0.62 | 0.5610 |
| 0.7723 | 2.0 | 176 | 0.7641 | 0.7 | 0.6953 |
| 0.6104 | 3.0 | 264 | 0.7508 | 0.7 | 0.6967 |
| 0.5009 | 4.0 | 352 | 0.7608 | 0.68 | 0.6727 |
| 0.377 | 5.0 | 440 | 0.7301 | 0.72 | 0.7217 |
| 0.3016 | 6.0 | 528 | 0.8430 | 0.73 | 0.7241 |
| 0.2305 | 7.0 | 616 | 0.8625 | 0.74 | 0.7346 |
| 0.2054 | 8.0 | 704 | 0.8746 | 0.74 | 0.7344 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_synonym
|
ThuyNT03
| 2023-08-31T19:11:58Z | 95 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T19:05:35Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_synonym
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_synonym
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9898
- Accuracy: 0.72
- F1: 0.7178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0771 | 1.0 | 87 | 1.0042 | 0.51 | 0.4221 |
| 0.9004 | 2.0 | 174 | 0.8167 | 0.66 | 0.6429 |
| 0.6997 | 3.0 | 261 | 0.6418 | 0.73 | 0.7224 |
| 0.5266 | 4.0 | 348 | 0.6756 | 0.78 | 0.7755 |
| 0.3977 | 5.0 | 435 | 0.7815 | 0.75 | 0.7405 |
| 0.3194 | 6.0 | 522 | 0.8890 | 0.73 | 0.7216 |
| 0.2557 | 7.0 | 609 | 0.9661 | 0.72 | 0.7142 |
| 0.1829 | 8.0 | 696 | 0.9898 | 0.72 | 0.7178 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Infernaught/loratest_v2
|
Infernaught
| 2023-08-31T19:11:39Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T19:11:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
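For reference, this configuration corresponds roughly to the following `transformers` setup; the base checkpoint below is only a placeholder, since the card does not state which model the adapter was trained on:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder base model; the adapter in this repo must be attached to the base it was trained on.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=bnb_config, device_map="auto")
# from peft import PeftModel
# model = PeftModel.from_pretrained(base, "Infernaught/loratest_v2")  # only valid with the matching base model
```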
### Framework versions
- PEFT 0.4.0
|
yaystevek/ppo-SnowballTarget
|
yaystevek
| 2023-08-31T19:11:01Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-31T19:10:57Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
 browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
 https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: yaystevek/ppo-SnowballTarget
3. Select your `.nn` or `.onnx` file
4. Click on Watch the agent play 👀
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_BERT
|
ThuyNT03
| 2023-08-31T19:05:24Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T19:00:30Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_insert_BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_insert_BERT
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9456
- Accuracy: 0.74
- F1: 0.7341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9972 | 1.0 | 88 | 0.7311 | 0.65 | 0.6359 |
| 0.7329 | 2.0 | 176 | 0.7124 | 0.73 | 0.7240 |
| 0.577 | 3.0 | 264 | 0.6674 | 0.76 | 0.7554 |
| 0.4601 | 4.0 | 352 | 0.6343 | 0.74 | 0.7385 |
| 0.3659 | 5.0 | 440 | 0.7159 | 0.75 | 0.7442 |
| 0.2772 | 6.0 | 528 | 0.8521 | 0.76 | 0.7503 |
| 0.1968 | 7.0 | 616 | 0.8760 | 0.75 | 0.7437 |
| 0.1852 | 8.0 | 704 | 0.9456 | 0.74 | 0.7341 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Shumit/MARY-Falcon-7B
|
Shumit
| 2023-08-31T19:04:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T19:03:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_w2v
|
ThuyNT03
| 2023-08-31T19:03:12Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T18:56:19Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_w2v
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_w2v
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9771
- Accuracy: 0.73
- F1: 0.7251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9504 | 1.0 | 86 | 0.7392 | 0.65 | 0.6205 |
| 0.6517 | 2.0 | 172 | 0.7087 | 0.69 | 0.6783 |
| 0.4998 | 3.0 | 258 | 0.7396 | 0.69 | 0.6788 |
| 0.3663 | 4.0 | 344 | 0.7976 | 0.69 | 0.6714 |
| 0.2623 | 5.0 | 430 | 0.8181 | 0.72 | 0.7177 |
| 0.1751 | 6.0 | 516 | 0.8604 | 0.75 | 0.7498 |
| 0.1446 | 7.0 | 602 | 0.9600 | 0.72 | 0.7135 |
| 0.1061 | 8.0 | 688 | 0.9771 | 0.73 | 0.7251 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_replace_synonym
|
ThuyNT03
| 2023-08-31T18:56:12Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T18:47:33Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_replace_synonym
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_replace_synonym
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1766
- Accuracy: 0.69
- F1: 0.6889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9621 | 1.0 | 87 | 0.8397 | 0.65 | 0.6348 |
| 0.6681 | 2.0 | 174 | 0.7618 | 0.69 | 0.6716 |
| 0.4669 | 3.0 | 261 | 0.7850 | 0.7 | 0.6983 |
| 0.3237 | 4.0 | 348 | 0.8321 | 0.71 | 0.7086 |
| 0.2253 | 5.0 | 435 | 0.9725 | 0.71 | 0.7097 |
| 0.1713 | 6.0 | 522 | 1.0872 | 0.69 | 0.6842 |
| 0.1195 | 7.0 | 609 | 1.1901 | 0.7 | 0.6974 |
| 0.092 | 8.0 | 696 | 1.1766 | 0.69 | 0.6889 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ahirtonlopes/wav2vec2-base-finetuned-ks
|
ahirtonlopes
| 2023-08-31T18:45:06Z | 137 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-31T16:15:19Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: superb
type: superb
config: ks
split: validation
args: ks
metrics:
- name: Accuracy
type: accuracy
value: 0.9010002942041777
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5155
- Accuracy: 0.9010
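A minimal keyword-spotting inference sketch with the audio-classification pipeline; the audio path is a placeholder for a short 16 kHz speech clip:
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="ahirtonlopes/wav2vec2-base-finetuned-ks")

# "sample.wav" is a placeholder: a ~1 s, 16 kHz mono recording of a spoken keyword.
predictions = classifier("sample.wav")
print(predictions[:3])  # top keyword labels with scores
```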
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4701 | 1.0 | 100 | 1.4821 | 0.6209 |
| 1.1434 | 2.0 | 200 | 1.0657 | 0.6649 |
| 0.8086 | 3.0 | 300 | 0.7112 | 0.8320 |
| 0.659 | 4.0 | 400 | 0.5686 | 0.8789 |
| 0.5987 | 5.0 | 500 | 0.5155 | 0.9010 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_tfidf
|
ThuyNT03
| 2023-08-31T18:40:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T18:33:51Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_tfidf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_tfidf
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2673
- Accuracy: 0.73
- F1: 0.7262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.894 | 1.0 | 88 | 0.7277 | 0.66 | 0.6187 |
| 0.5738 | 2.0 | 176 | 0.7561 | 0.7 | 0.6957 |
| 0.3647 | 3.0 | 264 | 0.8054 | 0.72 | 0.7149 |
| 0.2496 | 4.0 | 352 | 1.0288 | 0.69 | 0.6842 |
| 0.1633 | 5.0 | 440 | 1.1435 | 0.7 | 0.6943 |
| 0.1162 | 6.0 | 528 | 1.1985 | 0.72 | 0.7157 |
| 0.0909 | 7.0 | 616 | 1.2491 | 0.73 | 0.7262 |
| 0.0722 | 8.0 | 704 | 1.2673 | 0.73 | 0.7262 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_synonym
|
ThuyNT03
| 2023-08-31T18:39:35Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T18:33:32Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_insert_synonym
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_insert_synonym
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4863
- Accuracy: 0.7
- F1: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0118 | 1.0 | 88 | 0.7571 | 0.64 | 0.5664 |
| 0.7524 | 2.0 | 176 | 0.7290 | 0.71 | 0.6945 |
| 0.5666 | 3.0 | 264 | 0.7499 | 0.79 | 0.7873 |
| 0.406 | 4.0 | 352 | 0.8793 | 0.69 | 0.6840 |
| 0.2826 | 5.0 | 440 | 1.0454 | 0.72 | 0.7043 |
| 0.2261 | 6.0 | 528 | 1.1631 | 0.73 | 0.7196 |
| 0.1374 | 7.0 | 616 | 1.4514 | 0.7 | 0.6944 |
| 0.1337 | 8.0 | 704 | 1.4863 | 0.7 | 0.6922 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Campqt/Reinforce-Pixelcopter-PLE-v0
|
Campqt
| 2023-08-31T18:38:21Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T17:06:20Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 45.90 +/- 36.64
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_w2v
|
ThuyNT03
| 2023-08-31T18:33:47Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T18:27:34Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_w2v
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_w2v
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1056
- Accuracy: 0.73
- F1: 0.7280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8961 | 1.0 | 86 | 0.7149 | 0.69 | 0.6676 |
| 0.5695 | 2.0 | 172 | 0.7188 | 0.71 | 0.7029 |
| 0.3772 | 3.0 | 258 | 0.7802 | 0.71 | 0.7061 |
| 0.2899 | 4.0 | 344 | 0.7639 | 0.76 | 0.7595 |
| 0.2145 | 5.0 | 430 | 0.9140 | 0.73 | 0.7286 |
| 0.1299 | 6.0 | 516 | 1.0655 | 0.72 | 0.7123 |
| 0.1047 | 7.0 | 602 | 1.0912 | 0.73 | 0.7244 |
| 0.0864 | 8.0 | 688 | 1.1056 | 0.73 | 0.7280 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_delete
|
ThuyNT03
| 2023-08-31T18:33:28Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T18:28:49Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_delete
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_delete
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8666
- Accuracy: 0.72
- F1: 0.7141
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0782 | 1.0 | 88 | 0.9523 | 0.61 | 0.4982 |
| 0.8614 | 2.0 | 176 | 0.7754 | 0.72 | 0.7098 |
| 0.6895 | 3.0 | 264 | 0.6680 | 0.78 | 0.7790 |
| 0.5463 | 4.0 | 352 | 0.6805 | 0.76 | 0.7575 |
| 0.4314 | 5.0 | 440 | 0.7151 | 0.73 | 0.7247 |
| 0.3251 | 6.0 | 528 | 0.7835 | 0.71 | 0.7025 |
| 0.2719 | 7.0 | 616 | 0.8466 | 0.73 | 0.7260 |
| 0.2233 | 8.0 | 704 | 0.8666 | 0.72 | 0.7141 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_Mixed-aug_insert_synonym
|
ThuyNT03
| 2023-08-31T18:27:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T18:18:48Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-Final_Mixed-aug_insert_synonym
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-Final_Mixed-aug_insert_synonym
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2173
- Accuracy: 0.72
- F1: 0.7208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8905 | 1.0 | 88 | 0.7446 | 0.67 | 0.6565 |
| 0.6277 | 2.0 | 176 | 0.7272 | 0.72 | 0.7163 |
| 0.4506 | 3.0 | 264 | 0.7680 | 0.7 | 0.6957 |
| 0.3032 | 4.0 | 352 | 0.8468 | 0.73 | 0.7307 |
| 0.2046 | 5.0 | 440 | 0.9995 | 0.7 | 0.7011 |
| 0.1487 | 6.0 | 528 | 1.1278 | 0.72 | 0.7157 |
| 0.1103 | 7.0 | 616 | 1.1927 | 0.71 | 0.7097 |
| 0.0985 | 8.0 | 704 | 1.2173 | 0.72 | 0.7208 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
akashmaggon/pythia-70m
|
akashmaggon
| 2023-08-31T18:22:03Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:finetune:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T18:13:28Z |
---
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- generated_from_trainer
model-index:
- name: pythia-70m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-70m
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1844
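A minimal generation sketch with this checkpoint (the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="akashmaggon/pythia-70m")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```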
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5049 | 1.0 | 79 | 2.3376 |
| 2.0867 | 2.0 | 158 | 2.2416 |
| 1.9168 | 3.0 | 237 | 2.1985 |
| 1.8108 | 4.0 | 316 | 2.1851 |
| 1.7446 | 5.0 | 395 | 2.1844 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
khhuang/zerofec-daqa-t5-base
|
khhuang
| 2023-08-31T18:20:09Z | 107 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-13T20:29:47Z |
---
language: en
widget:
- text: What is Night of the Living Dead? \n Night of the Living Dead is a 1968 American independent horror film , directed by George A. Romero , starring Duane Jones and Judith O'Dea . George A. Romero George A. Romero Duane Jones Duane Jones Judith O'Dea Judith O'Dea independent Independent film horror film horror film.
---
# Domain-adapted QA Model From ZeroFEC
ZeroFEC is a faithful and interpretable factual error correction framework introduced in the paper [Zero-shot Faithful Factual Error Correction](https://aclanthology.org/2023.acl-long.311/). It involves a QA component, a UnifiedQA model further fine-tuned on two additional biomedical QA datasets. The associated code is released in [this](https://github.com/khuangaf/ZeroFEC) repository.
### How to use
Using Huggingface pipeline abstraction:
```python
from transformers import pipeline
nlp = pipeline("text2text-generation", model='khhuang/zerofec-daqa-t5-base', tokenizer='khhuang/zerofec-daqa-t5-base')
QUESTION = "What is Night of the Living Dead?"
CONTEXT = "Night of the Living Dead is a 1968 American independent horror film , directed by George A."
def format_inputs(context: str, question: str):
return f"{question} \n {context}"
text = format_inputs(CONTEXT, QUESTION)
nlp(text)
# should output [{'generated_text': 'a 1968 american independent horror film'}]
```
Using the pre-trained model directly:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('khhuang/zerofec-daqa-t5-base')
model = AutoModelForSeq2SeqLM.from_pretrained('khhuang/zerofec-daqa-t5-base')
QUESTION = "What is Night of the Living Dead?"
CONTEXT = "Night of the Living Dead is a 1968 American independent horror film , directed by George A."
def format_inputs(context: str, question: str):
return f"{question} \n {context}"
text = format_inputs(CONTEXT, QUESTION)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=32, num_beams=4)
output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(output)
# should output "a 1968 american independent horror film"
```
### Citation
```
@inproceedings{huang-etal-2023-zero,
title = "Zero-shot Faithful Factual Error Correction",
author = "Huang, Kung-Hsiang and
Chan, Hou Pong and
Ji, Heng",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.311",
doi = "10.18653/v1/2023.acl-long.311",
pages = "5660--5676",
}
```
|
khhuang/zerofec-qa2claim-t5-base
|
khhuang
| 2023-08-31T18:16:21Z | 1,589 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-13T19:43:58Z |
---
language: en
widget:
- text: a 1968 american independent horror film \\n What is Night of the Living Dead?
---
# QA2Claim Model From ZeroFEC
ZeroFEC is a faithful and interpretable factual error correction framework introduced in the paper [Zero-shot Faithful Factual Error Correction](https://aclanthology.org/2023.acl-long.311/). It involves a component that converts QA pairs into declarative statements, which is the model hosted in this repo. The associated code is released in [this](https://github.com/khuangaf/ZeroFEC) repository.
### How to use
Using Huggingface pipeline abstraction:
```python
from transformers import pipeline
nlp = pipeline("text2text-generation", model='khhuang/zerofec-qa2claim-t5-base', tokenizer='khhuang/zerofec-qa2claim-t5-base')
QUESTION = "What is Night of the Living Dead?"
ANSWER = "a 1968 american independent horror film"
def format_inputs(question: str, answer: str):
return f"{answer} \\n {question}"
text = format_inputs(QUESTION, ANSWER)
nlp(text)
# should output [{'generated_text': 'Night of the Living Dead is a 1968 american independent horror film.'}]
```
Using the pre-trained model directly:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('khhuang/zerofec-qa2claim-t5-base')
model = AutoModelForSeq2SeqLM.from_pretrained('khhuang/zerofec-qa2claim-t5-base')
QUESTION = "What is Night of the Living Dead?"
ANSWER = "a 1968 american independent horror film"
def format_inputs(question: str, answer: str):
return f"{answer} \\n {question}"
text = format_inputs(QUESTION, ANSWER)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=32, num_beams=4)
output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(output)
# should output "Night of the Living Dead is a 1968 american independent horror film."
```
### Citation
```
@inproceedings{huang-etal-2023-zero,
title = "Zero-shot Faithful Factual Error Correction",
author = "Huang, Kung-Hsiang and
Chan, Hou Pong and
Ji, Heng",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.311",
doi = "10.18653/v1/2023.acl-long.311",
pages = "5660--5676",
}
```
|
arroyadr/wav2vec2-base-finetuned-gtzan
|
arroyadr
| 2023-08-31T18:06:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-18T22:43:25Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.85
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6459
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0912 | 1.0 | 113 | 1.9840 | 0.33 |
| 1.7391 | 2.0 | 226 | 1.6205 | 0.57 |
| 1.3242 | 3.0 | 339 | 1.3338 | 0.61 |
| 1.1953 | 4.0 | 452 | 1.1904 | 0.68 |
| 0.8983 | 5.0 | 565 | 1.0357 | 0.75 |
| 0.8686 | 6.0 | 678 | 0.9569 | 0.78 |
| 0.84 | 7.0 | 791 | 0.7681 | 0.8 |
| 0.5776 | 8.0 | 904 | 0.6968 | 0.84 |
| 0.5186 | 9.0 | 1017 | 0.6541 | 0.86 |
| 0.3765 | 10.0 | 1130 | 0.6743 | 0.85 |
| 0.3671 | 11.0 | 1243 | 0.6459 | 0.85 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
aviroes/MAScIR_elderly_whisper-medium-LoRA-ev
|
aviroes
| 2023-08-31T18:03:33Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | null | 2023-08-31T15:47:34Z |
---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
model-index:
- name: MAScIR_elderly_whisper-medium-LoRA-ev
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MAScIR_elderly_whisper-medium-LoRA-ev
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0213
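The card does not describe how to use the weights; if this repository contains PEFT/LoRA adapter files (as the name suggests — this is an assumption, not documented here), they could be attached to the base model roughly like this:
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")

# Assumes the repo ships PEFT adapter weights (adapter_config.json + adapter weights).
model = PeftModel.from_pretrained(base, "aviroes/MAScIR_elderly_whisper-medium-LoRA-ev")
```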
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3194 | 0.19 | 100 | 0.2974 |
| 0.2485 | 0.37 | 200 | 0.2865 |
| 0.2532 | 0.56 | 300 | 0.2810 |
| 0.2306 | 0.74 | 400 | 0.2225 |
| 0.1954 | 0.93 | 500 | 0.2355 |
| 0.1178 | 1.11 | 600 | 0.1883 |
| 0.1087 | 1.3 | 700 | 0.1567 |
| 0.098 | 1.48 | 800 | 0.1593 |
| 0.0661 | 1.67 | 900 | 0.0985 |
| 0.0675 | 1.85 | 1000 | 0.0602 |
| 0.0297 | 2.04 | 1100 | 0.0543 |
| 0.0172 | 2.22 | 1200 | 0.0436 |
| 0.0157 | 2.41 | 1300 | 0.0403 |
| 0.0143 | 2.59 | 1400 | 0.0317 |
| 0.0167 | 2.78 | 1500 | 0.0265 |
| 0.0095 | 2.96 | 1600 | 0.0213 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_VietNam-aug_insert_tfidf
|
ThuyNT03
| 2023-08-31T17:36:45Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T17:30:38Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-aug_replace_synonym-aug_insert_tfidf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-aug_replace_synonym-aug_insert_tfidf
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2544
- Accuracy: 0.71
- F1: 0.7177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8536 | 1.0 | 87 | 0.6560 | 0.72 | 0.7124 |
| 0.5144 | 2.0 | 174 | 0.5992 | 0.75 | 0.7574 |
| 0.341 | 3.0 | 261 | 0.7304 | 0.73 | 0.7389 |
| 0.216 | 4.0 | 348 | 1.0216 | 0.68 | 0.6885 |
| 0.178 | 5.0 | 435 | 1.0374 | 0.74 | 0.7506 |
| 0.1178 | 6.0 | 522 | 1.1481 | 0.72 | 0.7316 |
| 0.1049 | 7.0 | 609 | 1.2096 | 0.71 | 0.7177 |
| 0.0864 | 8.0 | 696 | 1.2544 | 0.71 | 0.7177 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ironchanchellor/segformer-b0_DsB
|
ironchanchellor
| 2023-08-31T17:30:39Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T16:18:57Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-metallography_DsB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-metallography_DsB
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0162
- Mean Iou: 0.7889
- Mean Accuracy: 0.9743
- Overall Accuracy: 0.9937
- Accuracy Background: nan
- Accuracy Haz: 0.9934
- Accuracy Matrix: 0.9859
- Accuracy Porosity: 0.9183
- Accuracy Carbides: 0.9759
- Accuracy Substrate: 0.9981
- Iou Background: 0.0
- Iou Haz: 0.9909
- Iou Matrix: 0.9758
- Iou Porosity: 0.8239
- Iou Carbides: 0.9504
- Iou Substrate: 0.9926
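A minimal inference sketch for this segmentation model; the input image path is a placeholder, and if the repository does not ship an image-processor config, the base `nvidia/mit-b0` processor can be loaded instead:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "ironchanchellor/segformer-b0_DsB"
processor = AutoImageProcessor.from_pretrained(repo)   # if missing, use "nvidia/mit-b0" instead
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("micrograph.png").convert("RGB")    # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                    # shape (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]                         # per-pixel class ids
```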
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Haz | Accuracy Matrix | Accuracy Porosity | Accuracy Carbides | Accuracy Substrate | Iou Background | Iou Haz | Iou Matrix | Iou Porosity | Iou Carbides | Iou Substrate |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:------------:|:---------------:|:-----------------:|:-----------------:|:------------------:|:--------------:|:-------:|:----------:|:------------:|:------------:|:-------------:|
| 1.1925 | 1.0 | 350 | 0.2093 | 0.5751 | 0.7355 | 0.9228 | nan | 0.8617 | 0.9887 | 0.0 | 0.8605 | 0.9668 | 0.0 | 0.8289 | 0.9133 | 0.0 | 0.8400 | 0.8683 |
| 0.3065 | 2.0 | 700 | 0.1070 | 0.6106 | 0.7607 | 0.9570 | nan | 0.9158 | 0.9711 | 0.0 | 0.9221 | 0.9945 | 0.0 | 0.9053 | 0.9400 | 0.0 | 0.8907 | 0.9276 |
| 0.1839 | 3.0 | 1050 | 0.0717 | 0.6284 | 0.7777 | 0.9747 | nan | 0.9737 | 0.9668 | 0.0 | 0.9676 | 0.9802 | 0.0 | 0.9488 | 0.9513 | 0.0 | 0.9116 | 0.9590 |
| 1.0057 | 4.0 | 1400 | 0.0470 | 0.6322 | 0.7765 | 0.9783 | nan | 0.9889 | 0.9718 | 0.0 | 0.9460 | 0.9761 | 0.0 | 0.9580 | 0.9493 | 0.0 | 0.9193 | 0.9669 |
| 1.3313 | 5.0 | 1750 | 0.0360 | 0.6338 | 0.7751 | 0.9839 | nan | 0.9825 | 0.9861 | 0.0 | 0.9128 | 0.9940 | 0.0 | 0.9737 | 0.9478 | 0.0 | 0.9015 | 0.9799 |
| 0.1398 | 6.0 | 2100 | 0.0333 | 0.6407 | 0.7849 | 0.9853 | nan | 0.9943 | 0.9782 | 0.0 | 0.9689 | 0.9830 | 0.0 | 0.9743 | 0.9623 | 0.0 | 0.9290 | 0.9787 |
| 0.4763 | 7.0 | 2450 | 0.0941 | 0.6520 | 0.8054 | 0.9710 | nan | 0.9435 | 0.9745 | 0.1384 | 0.9757 | 0.9950 | 0.0 | 0.9367 | 0.9622 | 0.1384 | 0.9258 | 0.9486 |
| 0.074 | 8.0 | 2800 | 0.0373 | 0.7154 | 0.8725 | 0.9848 | nan | 0.9877 | 0.9841 | 0.4466 | 0.9577 | 0.9864 | 0.0 | 0.9711 | 0.9646 | 0.4466 | 0.9339 | 0.9760 |
| 0.0637 | 9.0 | 3150 | 0.0239 | 0.7358 | 0.8946 | 0.9885 | nan | 0.9867 | 0.9907 | 0.5610 | 0.9388 | 0.9956 | 0.0 | 0.9815 | 0.9631 | 0.5591 | 0.9258 | 0.9851 |
| 0.0402 | 10.0 | 3500 | 0.0295 | 0.7462 | 0.9085 | 0.9865 | nan | 0.9774 | 0.9872 | 0.6256 | 0.9541 | 0.9982 | 0.0 | 0.9752 | 0.9662 | 0.6232 | 0.9333 | 0.9796 |
| 1.069 | 11.0 | 3850 | 0.0244 | 0.7494 | 0.9115 | 0.9889 | nan | 0.9874 | 0.9908 | 0.6455 | 0.9383 | 0.9957 | 0.0 | 0.9822 | 0.9644 | 0.6384 | 0.9263 | 0.9854 |
| 0.5997 | 12.0 | 4200 | 0.0243 | 0.7492 | 0.9106 | 0.9893 | nan | 0.9859 | 0.9884 | 0.6271 | 0.9545 | 0.9970 | 0.0 | 0.9817 | 0.9684 | 0.6246 | 0.9356 | 0.9850 |
| 0.091 | 13.0 | 4550 | 0.0269 | 0.7557 | 0.9197 | 0.9886 | nan | 0.9858 | 0.9900 | 0.6747 | 0.9530 | 0.9950 | 0.0 | 0.9799 | 0.9693 | 0.6659 | 0.9361 | 0.9833 |
| 1.3004 | 14.0 | 4900 | 0.0226 | 0.7740 | 0.9448 | 0.9906 | nan | 0.9887 | 0.9859 | 0.7857 | 0.9674 | 0.9964 | 0.0 | 0.9841 | 0.9719 | 0.7585 | 0.9424 | 0.9870 |
| 0.94 | 15.0 | 5250 | 0.1346 | 0.7572 | 0.9315 | 0.9731 | nan | 0.9938 | 0.9862 | 0.7591 | 0.9657 | 0.9528 | 0.0 | 0.9423 | 0.9709 | 0.7399 | 0.9417 | 0.9481 |
| 0.8906 | 16.0 | 5600 | 0.0221 | 0.7781 | 0.9528 | 0.9911 | nan | 0.9886 | 0.9844 | 0.8206 | 0.9729 | 0.9973 | 0.0 | 0.9851 | 0.9724 | 0.7805 | 0.9429 | 0.9877 |
| 0.9739 | 17.0 | 5950 | 0.0233 | 0.7629 | 0.9264 | 0.9905 | nan | 0.9870 | 0.9914 | 0.7040 | 0.9516 | 0.9980 | 0.0 | 0.9845 | 0.9700 | 0.6986 | 0.9367 | 0.9874 |
| 0.417 | 18.0 | 6300 | 0.0200 | 0.7724 | 0.9392 | 0.9917 | nan | 0.9911 | 0.9909 | 0.7618 | 0.9556 | 0.9967 | 0.0 | 0.9869 | 0.9718 | 0.7468 | 0.9399 | 0.9893 |
| 0.0405 | 19.0 | 6650 | 0.1657 | 0.7661 | 0.9474 | 0.9743 | nan | 0.9434 | 0.9863 | 0.8421 | 0.9661 | 0.9991 | 0.0 | 0.9421 | 0.9718 | 0.7877 | 0.9422 | 0.9528 |
| 1.2414 | 20.0 | 7000 | 0.0275 | 0.7808 | 0.9593 | 0.9900 | nan | 0.9844 | 0.9838 | 0.8565 | 0.9733 | 0.9986 | 0.0 | 0.9824 | 0.9725 | 0.8000 | 0.9442 | 0.9855 |
| 0.7539 | 21.0 | 7350 | 0.0200 | 0.7791 | 0.9509 | 0.9918 | nan | 0.9947 | 0.9857 | 0.8106 | 0.9698 | 0.9936 | 0.0 | 0.9872 | 0.9724 | 0.7813 | 0.9445 | 0.9895 |
| 0.0158 | 22.0 | 7700 | 0.0159 | 0.7773 | 0.9468 | 0.9926 | nan | 0.9924 | 0.9854 | 0.7855 | 0.9736 | 0.9972 | 0.0 | 0.9889 | 0.9731 | 0.7657 | 0.9448 | 0.9910 |
| 0.3368 | 23.0 | 8050 | 0.0176 | 0.7849 | 0.9678 | 0.9925 | nan | 0.9962 | 0.9844 | 0.8892 | 0.9758 | 0.9933 | 0.0 | 0.9882 | 0.9739 | 0.8113 | 0.9459 | 0.9904 |
| 0.0526 | 24.0 | 8400 | 0.0168 | 0.7835 | 0.9629 | 0.9927 | nan | 0.9916 | 0.9895 | 0.8727 | 0.9629 | 0.9978 | 0.0 | 0.9888 | 0.9739 | 0.8030 | 0.9448 | 0.9908 |
| 0.9409 | 25.0 | 8750 | 0.0205 | 0.7842 | 0.9681 | 0.9920 | nan | 0.9899 | 0.9829 | 0.8925 | 0.9773 | 0.9980 | 0.0 | 0.9873 | 0.9732 | 0.8096 | 0.9452 | 0.9897 |
| 1.0493 | 26.0 | 9100 | 0.0187 | 0.7823 | 0.9542 | 0.9924 | nan | 0.9906 | 0.9877 | 0.8277 | 0.9670 | 0.9981 | 0.0 | 0.9881 | 0.9736 | 0.7966 | 0.9454 | 0.9903 |
| 0.0685 | 27.0 | 9450 | 0.0166 | 0.7833 | 0.9549 | 0.9931 | nan | 0.9939 | 0.9868 | 0.8270 | 0.9698 | 0.9969 | 0.0 | 0.9898 | 0.9741 | 0.7970 | 0.9470 | 0.9917 |
| 0.0594 | 28.0 | 9800 | 0.0172 | 0.7882 | 0.9705 | 0.9932 | nan | 0.9942 | 0.9849 | 0.9007 | 0.9761 | 0.9965 | 0.0 | 0.9898 | 0.9749 | 0.8251 | 0.9479 | 0.9917 |
| 1.1676 | 29.0 | 10150 | 0.0166 | 0.7867 | 0.9726 | 0.9930 | nan | 0.9948 | 0.9834 | 0.9115 | 0.9777 | 0.9957 | 0.0 | 0.9896 | 0.9741 | 0.8178 | 0.9474 | 0.9915 |
| 0.076 | 30.0 | 10500 | 0.0184 | 0.7845 | 0.9595 | 0.9928 | nan | 0.9925 | 0.9898 | 0.8578 | 0.9598 | 0.9976 | 0.0 | 0.9895 | 0.9728 | 0.8090 | 0.9439 | 0.9917 |
| 0.0709 | 31.0 | 10850 | 0.0187 | 0.7876 | 0.9726 | 0.9931 | nan | 0.9934 | 0.9842 | 0.9118 | 0.9764 | 0.9972 | 0.0 | 0.9897 | 0.9744 | 0.8215 | 0.9480 | 0.9917 |
| 0.2951 | 32.0 | 11200 | 0.0171 | 0.7879 | 0.9701 | 0.9932 | nan | 0.9949 | 0.9853 | 0.8995 | 0.9747 | 0.9961 | 0.0 | 0.9900 | 0.9747 | 0.8226 | 0.9484 | 0.9919 |
| 0.0371 | 33.0 | 11550 | 0.0165 | 0.7863 | 0.9624 | 0.9932 | nan | 0.9941 | 0.9871 | 0.8644 | 0.9696 | 0.9967 | 0.0 | 0.9900 | 0.9742 | 0.8138 | 0.9480 | 0.9920 |
| 0.0374 | 34.0 | 11900 | 0.0183 | 0.7874 | 0.9718 | 0.9929 | nan | 0.9910 | 0.9862 | 0.9089 | 0.9743 | 0.9985 | 0.0 | 0.9891 | 0.9752 | 0.8202 | 0.9490 | 0.9911 |
| 0.7856 | 35.0 | 12250 | 0.0187 | 0.7873 | 0.9710 | 0.9931 | nan | 0.9918 | 0.9860 | 0.9042 | 0.9751 | 0.9981 | 0.0 | 0.9894 | 0.9753 | 0.8192 | 0.9483 | 0.9914 |
| 0.9141 | 36.0 | 12600 | 0.0151 | 0.7892 | 0.9686 | 0.9938 | nan | 0.9946 | 0.9881 | 0.8920 | 0.9712 | 0.9973 | 0.0 | 0.9912 | 0.9759 | 0.8254 | 0.9497 | 0.9929 |
| 0.0195 | 37.0 | 12950 | 0.0169 | 0.7880 | 0.9653 | 0.9932 | nan | 0.9918 | 0.9875 | 0.8770 | 0.9719 | 0.9985 | 0.0 | 0.9897 | 0.9755 | 0.8219 | 0.9493 | 0.9916 |
| 0.0355 | 38.0 | 13300 | 0.0177 | 0.7888 | 0.9717 | 0.9933 | nan | 0.9936 | 0.9843 | 0.9041 | 0.9796 | 0.9969 | 0.0 | 0.9898 | 0.9755 | 0.8272 | 0.9487 | 0.9917 |
| 0.07 | 39.0 | 13650 | 0.0165 | 0.7880 | 0.9736 | 0.9935 | nan | 0.9941 | 0.9848 | 0.9152 | 0.9765 | 0.9973 | 0.0 | 0.9906 | 0.9750 | 0.8209 | 0.9491 | 0.9924 |
| 0.0244 | 40.0 | 14000 | 0.0178 | 0.7889 | 0.9696 | 0.9933 | nan | 0.9927 | 0.9854 | 0.8963 | 0.9758 | 0.9980 | 0.0 | 0.9899 | 0.9753 | 0.8268 | 0.9496 | 0.9919 |
| 0.0679 | 41.0 | 14350 | 0.0157 | 0.7895 | 0.9707 | 0.9936 | nan | 0.9945 | 0.9858 | 0.9012 | 0.9750 | 0.9972 | 0.0 | 0.9908 | 0.9754 | 0.8284 | 0.9499 | 0.9926 |
| 0.0498 | 42.0 | 14700 | 0.0164 | 0.7866 | 0.9765 | 0.9935 | nan | 0.9938 | 0.9839 | 0.9292 | 0.9781 | 0.9976 | 0.0 | 0.9907 | 0.9748 | 0.8122 | 0.9494 | 0.9925 |
| 0.0593 | 43.0 | 15050 | 0.0146 | 0.7881 | 0.9644 | 0.9939 | nan | 0.9953 | 0.9873 | 0.8695 | 0.9730 | 0.9970 | 0.0 | 0.9916 | 0.9756 | 0.8186 | 0.9494 | 0.9932 |
| 0.0068 | 44.0 | 15400 | 0.0151 | 0.7883 | 0.9743 | 0.9938 | nan | 0.9942 | 0.9857 | 0.9191 | 0.9749 | 0.9978 | 0.0 | 0.9913 | 0.9753 | 0.8203 | 0.9498 | 0.9930 |
| 1.2941 | 45.0 | 15750 | 0.0150 | 0.7888 | 0.9714 | 0.9939 | nan | 0.9954 | 0.9862 | 0.9044 | 0.9742 | 0.9968 | 0.0 | 0.9915 | 0.9754 | 0.8228 | 0.9499 | 0.9932 |
| 0.0113 | 46.0 | 16100 | 0.0151 | 0.7893 | 0.9732 | 0.9939 | nan | 0.9943 | 0.9866 | 0.9130 | 0.9741 | 0.9978 | 0.0 | 0.9914 | 0.9759 | 0.8251 | 0.9505 | 0.9930 |
| 0.9812 | 47.0 | 16450 | 0.0185 | 0.7875 | 0.9754 | 0.9933 | nan | 0.9920 | 0.9864 | 0.9257 | 0.9745 | 0.9984 | 0.0 | 0.9898 | 0.9759 | 0.8175 | 0.9503 | 0.9917 |
| 0.0126 | 48.0 | 16800 | 0.0152 | 0.7887 | 0.9743 | 0.9938 | nan | 0.9942 | 0.9856 | 0.9185 | 0.9755 | 0.9976 | 0.0 | 0.9911 | 0.9756 | 0.8221 | 0.9506 | 0.9929 |
| 1.4415 | 49.0 | 17150 | 0.0154 | 0.7894 | 0.9674 | 0.9940 | nan | 0.9952 | 0.9872 | 0.8839 | 0.9734 | 0.9972 | 0.0 | 0.9917 | 0.9759 | 0.8255 | 0.9501 | 0.9934 |
| 0.0285 | 50.0 | 17500 | 0.0162 | 0.7889 | 0.9743 | 0.9937 | nan | 0.9934 | 0.9859 | 0.9183 | 0.9759 | 0.9981 | 0.0 | 0.9909 | 0.9758 | 0.8239 | 0.9504 | 0.9926 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/PhoBERT-Final_VietNam-aug_insert_w2v
|
ThuyNT03
| 2023-08-31T17:27:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T17:16:29Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT-aug_replace_synonym-aug_insert_w2v
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBERT-aug_replace_synonym-aug_insert_w2v
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3269
- Accuracy: 0.68
- F1: 0.6858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.895 | 1.0 | 85 | 0.7598 | 0.65 | 0.5672 |
| 0.5508 | 2.0 | 170 | 0.7204 | 0.69 | 0.6897 |
| 0.3688 | 3.0 | 255 | 0.8039 | 0.72 | 0.7133 |
| 0.2403 | 4.0 | 340 | 0.9418 | 0.66 | 0.6672 |
| 0.1453 | 5.0 | 425 | 1.1062 | 0.67 | 0.6755 |
| 0.1089 | 6.0 | 510 | 1.2567 | 0.68 | 0.6834 |
| 0.0843 | 7.0 | 595 | 1.3071 | 0.67 | 0.6755 |
| 0.0779 | 8.0 | 680 | 1.3269 | 0.68 | 0.6858 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rossevine/Model_G_S_Berita_Wav2Vec2
|
rossevine
| 2023-08-31T17:17:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-19T15:16:57Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Model_G_S_Berita_Wav2Vec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model_G_S_Berita_Wav2Vec2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0232
- Wer: 0.0308
- Cer: 0.0050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.3802 | 12.5 | 400 | 0.0473 | 0.0692 | 0.0105 |
| 0.0245 | 25.0 | 800 | 0.0232 | 0.0308 | 0.0050 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
matvalan/vittae-cot-llama2
|
matvalan
| 2023-08-31T17:13:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T17:13:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
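For illustration only, the config above could be recreated with `transformers`' `BitsAndBytesConfig` roughly as follows (a sketch, not the original training script):

```python
# Sketch only: rebuilds the 4-bit quantization config listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Pass quantization_config=bnb_config when loading the base model for PEFT training.
```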
### Framework versions
- PEFT 0.6.0.dev0
|
dkqjrm/20230831201806
|
dkqjrm
| 2023-08-31T17:11:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T11:18:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230831201806'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230831201806
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6291
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.6240 | 0.5 |
| 0.6425 | 2.0 | 680 | 0.6234 | 0.5 |
| 0.6397 | 3.0 | 1020 | 0.6203 | 0.5 |
| 0.6397 | 4.0 | 1360 | 0.6364 | 0.5 |
| 0.6363 | 5.0 | 1700 | 0.7003 | 0.5 |
| 0.6373 | 6.0 | 2040 | 0.6233 | 0.5 |
| 0.6373 | 7.0 | 2380 | 0.6233 | 0.5 |
| 0.637 | 8.0 | 2720 | 0.6515 | 0.5 |
| 0.631 | 9.0 | 3060 | 0.6234 | 0.5 |
| 0.631 | 10.0 | 3400 | 0.6299 | 0.5 |
| 0.633 | 11.0 | 3740 | 0.6315 | 0.5 |
| 0.6325 | 12.0 | 4080 | 0.6281 | 0.5 |
| 0.6325 | 13.0 | 4420 | 0.6434 | 0.5 |
| 0.6267 | 14.0 | 4760 | 0.6233 | 0.5 |
| 0.6323 | 15.0 | 5100 | 0.6253 | 0.5 |
| 0.6323 | 16.0 | 5440 | 0.6233 | 0.5 |
| 0.6325 | 17.0 | 5780 | 0.6314 | 0.5 |
| 0.6274 | 18.0 | 6120 | 0.6265 | 0.5 |
| 0.6274 | 19.0 | 6460 | 0.6298 | 0.5 |
| 0.6301 | 20.0 | 6800 | 0.6363 | 0.5 |
| 0.6268 | 21.0 | 7140 | 0.6296 | 0.5 |
| 0.6268 | 22.0 | 7480 | 0.6402 | 0.5 |
| 0.6316 | 23.0 | 7820 | 0.6282 | 0.5 |
| 0.6272 | 24.0 | 8160 | 0.6233 | 0.5 |
| 0.6314 | 25.0 | 8500 | 0.6245 | 0.5 |
| 0.6314 | 26.0 | 8840 | 0.6702 | 0.5 |
| 0.6298 | 27.0 | 9180 | 0.6484 | 0.5 |
| 0.6282 | 28.0 | 9520 | 0.6235 | 0.5 |
| 0.6282 | 29.0 | 9860 | 0.6524 | 0.5 |
| 0.6259 | 30.0 | 10200 | 0.6245 | 0.5 |
| 0.6271 | 31.0 | 10540 | 0.6233 | 0.5 |
| 0.6271 | 32.0 | 10880 | 0.6320 | 0.5 |
| 0.6264 | 33.0 | 11220 | 0.6240 | 0.5 |
| 0.6265 | 34.0 | 11560 | 0.6325 | 0.5 |
| 0.6265 | 35.0 | 11900 | 0.6329 | 0.5 |
| 0.6268 | 36.0 | 12240 | 0.6377 | 0.5 |
| 0.6261 | 37.0 | 12580 | 0.6234 | 0.5 |
| 0.6261 | 38.0 | 12920 | 0.6323 | 0.5 |
| 0.626 | 39.0 | 13260 | 0.6402 | 0.5 |
| 0.6245 | 40.0 | 13600 | 0.6264 | 0.5 |
| 0.6245 | 41.0 | 13940 | 0.6245 | 0.5 |
| 0.6253 | 42.0 | 14280 | 0.6278 | 0.5 |
| 0.6223 | 43.0 | 14620 | 0.6260 | 0.5 |
| 0.6223 | 44.0 | 14960 | 0.6236 | 0.5 |
| 0.6266 | 45.0 | 15300 | 0.6378 | 0.5 |
| 0.6219 | 46.0 | 15640 | 0.6349 | 0.5 |
| 0.6219 | 47.0 | 15980 | 0.6393 | 0.5 |
| 0.6256 | 48.0 | 16320 | 0.6266 | 0.5 |
| 0.6241 | 49.0 | 16660 | 0.6338 | 0.5 |
| 0.624 | 50.0 | 17000 | 0.6237 | 0.5 |
| 0.624 | 51.0 | 17340 | 0.6265 | 0.5 |
| 0.6214 | 52.0 | 17680 | 0.6259 | 0.5 |
| 0.627 | 53.0 | 18020 | 0.6324 | 0.5 |
| 0.627 | 54.0 | 18360 | 0.6257 | 0.5 |
| 0.6218 | 55.0 | 18700 | 0.6246 | 0.5 |
| 0.621 | 56.0 | 19040 | 0.6242 | 0.5 |
| 0.621 | 57.0 | 19380 | 0.6336 | 0.5 |
| 0.6212 | 58.0 | 19720 | 0.6236 | 0.5 |
| 0.6239 | 59.0 | 20060 | 0.6489 | 0.5 |
| 0.6239 | 60.0 | 20400 | 0.6256 | 0.5 |
| 0.6218 | 61.0 | 20740 | 0.6251 | 0.5 |
| 0.6216 | 62.0 | 21080 | 0.6279 | 0.5 |
| 0.6216 | 63.0 | 21420 | 0.6305 | 0.5 |
| 0.6196 | 64.0 | 21760 | 0.6326 | 0.5 |
| 0.6251 | 65.0 | 22100 | 0.6288 | 0.5 |
| 0.6251 | 66.0 | 22440 | 0.6412 | 0.5 |
| 0.6162 | 67.0 | 22780 | 0.6270 | 0.5 |
| 0.6231 | 68.0 | 23120 | 0.6261 | 0.5 |
| 0.6231 | 69.0 | 23460 | 0.6254 | 0.5 |
| 0.6215 | 70.0 | 23800 | 0.6237 | 0.5 |
| 0.6202 | 71.0 | 24140 | 0.6265 | 0.5 |
| 0.6202 | 72.0 | 24480 | 0.6329 | 0.5 |
| 0.6184 | 73.0 | 24820 | 0.6292 | 0.5 |
| 0.6207 | 74.0 | 25160 | 0.6304 | 0.5 |
| 0.6193 | 75.0 | 25500 | 0.6271 | 0.5 |
| 0.6193 | 76.0 | 25840 | 0.6301 | 0.5 |
| 0.6202 | 77.0 | 26180 | 0.6261 | 0.5 |
| 0.6188 | 78.0 | 26520 | 0.6289 | 0.5 |
| 0.6188 | 79.0 | 26860 | 0.6293 | 0.5 |
| 0.6197 | 80.0 | 27200 | 0.6291 | 0.5 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Donaldbassa/bert-classification-text
|
Donaldbassa
| 2023-08-31T17:10:22Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T16:53:08Z |
---
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
|
Writer/InstructPalmyra-20b
|
Writer
| 2023-08-31T17:01:39Z | 1,570 | 40 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"InstructGPT",
"hf",
"palmyra",
"en",
"dataset:Writer/palmyra-data-index",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-28T15:52:03Z |
---
license: apache-2.0
language:
- en
tags:
- InstructGPT
- hf
- palmyra
datasets:
- Writer/palmyra-data-index
---
# InstructPalmyra-20b
- **Developed by:** [https://writer.com/](https://writer.com/);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English;
- **License:** Apache 2.0;
- **Finetuned from model:** [Palmyra-20B](https://huggingface.co/Writer/palmyra-large).
## Model Description
Introducing InstructPalmyra-20b, a state-of-the-art instruction-following 20b language model designed to deliver exceptional performance and versatility. Derived from the foundational architecture of [Palmyra-20b](https://huggingface.co/Writer/palmyra-large), InstructPalmyra-20b is specifically tailored to address the growing demand for advanced natural language processing and comprehension capabilities.
The InstructPalmyra-20b model is meticulously trained on an extensive dataset of approximately 70,000 instruction-response records. These records are generated by our dedicated Writer Linguist team, who possess considerable expertise in language modeling and fine-tuning techniques. By leveraging their skills and knowledge, the InstructPalmyra-20b model is primed to offer unparalleled proficiency in understanding and executing language-based instructions.
One of the key differentiators of InstructPalmyra-20b lies in its ability to process complex instructions and generate accurate, contextually appropriate responses. This makes it an ideal choice for a wide range of applications, including virtual assistants, customer support, content generation, and more. Additionally, the model's comprehensive training enables it to adapt and perform well under varying conditions and contexts, further expanding its potential use cases.
## Usage :
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "Writer/InstructPalmyra-20b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.float16
)
instruction = "Describe a futuristic device that revolutionizes space travel."
input_context = ""  # optional extra context; leave empty to use the no-input prompt
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
text = (
    PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
    if not input_context
    else PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input_context)
)
model_inputs = tokenizer(text, return_tensors="pt").to("cuda")
output_ids = model.generate(
**model_inputs,
max_length=256,
)
output_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
clean_output = output_text.split("### Response:")[1].strip()
print(clean_output)
```
It can also be used with text-generation-inference
```sh
model=Writer/InstructPalmyra-20b
volume=$PWD/data
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference --model-id $model
```
### Limitations and Biases
InstructPalmyra's core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting InstructPalmyra, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on InstructPalmyra to produce factually correct results.
InstructPalmyra was trained on Writer’s custom data. As with all language models, it is difficult to predict how InstructPalmyra will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results.
## Uses
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
InstructPalmyra-20b is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of InstructPalmyra-20b to develop guardrails and to take appropriate precautions for any production use.
## Citation and Related Information
To cite this model:
```
@misc{InstructPalmyra,
author = {Writer Engineering team},
title = {{InstructPalmyra-20b : Instruct tuned Palmyra-Large model}},
howpublished = {\url{https://dev.writer.com}},
year = 2023,
month = August
}
```
|
sinepy/market-mail-ai-v3
|
sinepy
| 2023-08-31T16:50:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T16:50:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
nightdude/config_801
|
nightdude
| 2023-08-31T16:48:53Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T16:47:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Faisalx7/CC_Q_and_A_v6_8000
|
Faisalx7
| 2023-08-31T16:31:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T16:31:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
gameofdimension/lora-trained-xl-erza
|
gameofdimension
| 2023-08-31T16:25:23Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-31T13:38:14Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of erza toddler
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - felixdae/lora-trained-xl-erza
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of erza toddler using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
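A minimal inference sketch (assuming a recent `diffusers` release with SDXL LoRA support; the repo id and prompt below follow this card and may need adjusting):

```python
# Sketch only: loads SDXL with the fp16-fix VAE and applies these LoRA weights.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("gameofdimension/lora-trained-xl-erza")

image = pipe("a photo of erza toddler", num_inference_steps=30).images[0]
image.save("erza.png")
```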
|
debabrata-ai/Nepali-Named-Entity-Tagger-XLM-R
|
debabrata-ai
| 2023-08-31T16:17:26Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"named-entity-recognition",
"nepali-language",
"ne",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-31T12:50:36Z |
---
name: "Nepali Named Entity Tagger"
description: "Fine-tuned XLM-RoBERTa model for Named Entity Recognition in Nepali."
language: ne
datasets:
- wikiann
tags:
- named-entity-recognition
- nepali-language
- xlm-roberta
---
# Nepali Named Entity Tagger
Fine-tuned XLM-RoBERTa model for Named Entity Recognition in Nepali.
Label ID to label name mapping:
| Label ID | Label Name |
| -------- | ---------- |
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
Test Results Summary:
- Evaluation Loss: 0.5799
- Overall Precision: 0.7949
- Overall Recall: 0.7806
- Overall F1-score: 0.7877
- Overall Accuracy: 0.9021

The model's ability to identify specific entity types:
- Location (LOC) F1-score: 0.7417
- Organization (ORG) F1-score: 0.7559
- Person (PER) F1-score: 0.8696
Usage:
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("debabrata-ai/Nepali-Named-Entity-Tagger-XLM-R")
model = AutoModelForTokenClassification.from_pretrained("debabrata-ai/Nepali-Named-Entity-Tagger-XLM-R")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
input_text = "ईनरह्वील क्लब स्थापना भएको सय वर्ष (शताब्दी) पुरा भएको अवसरमा ईरनह्वील क्लब अफ बुटवलले एक सय वृक्षरोपण गरेको छ । ईनरह्वील क्लब स्थापना भएको अवसरमा ईनरव्हिल क्लब अफ बुटवलले तिलोत्तमा नगरपालिका वडा नम्बर १ मा रहेको शंकनगर सामुदायिक बनमा एक सय वटा टिकको बिरुवा रोपिएको क्लबकी सचिव सोनाली गुरुङले जानकारी दिनुभयो ।क्लब अध्यक्ष सानु श्रेष्ठले पदभार ग्रहण गर्ने कार्यक्रममा एक सय बृक्षरोपण गर्ने कार्यक्रम ल्याउनु भएको थियो । सोही अनुसार यो कार्यक्रम गरिएको उहाँले बताउनु भयो । क्लबले सामाजिक उत्तरदायित्वका कामहरूलाई निरन्तरता दिने पनि अध्यक्ष श्रेष्ठले प्रतिबद्धता व्यक्त गर्नुभयो ।ईनरह्वील क्लब अफ बुटवलका पूर्व अध्यक्ष लक्ष्मी बस्यालले ईनरह्वीलको इतिहासको बारेमा बताउँदै एक सय दिनमा एक सय भन्दा बढी बिरुवा रोपिनु सकारात्मक कुरा भएको बताउनु भयो । शंकनगर सामुदायिक बन उपभोक्ता समूहका अध्यक्ष अध्यक्ष रवीन्द्र बस्नेतले बृक्षरोपण निकै राम्रो कुरा भएपनि बिरुवाको संरक्षण उत्तिकै चुनौतीपूर्ण रहेको बताउनु भयो । उहाँले समय समयमा आफ्नो क्लबले लगाएका बिरुवा हुर्काउन ध्यान दिन आग्रह गर्नुभयो ।शंकनगर सामुदायिक बन उपभोक्ता समूह उपाध्यक्ष कमला ज्ञवाली, सदस्य कृष्ण केसी, लगायतले भनाई राख्नुभएको थियो । क्लब अध्यक्ष सानु श्रेष्ठको अध्यक्षता, सचिव सोनाली गुरुङको सञ्चालन तथा गंगा श्रेष्ठको संयोजनमा सम्पन्न भएको थियो । कार्यक्रम उपभोक्ता समितिका पदाधिकारी सदस्य, क्लबका सदस्यहरूको सहभागिता रहेको थियो "
ner_results = nlp(input_text)
ner_results
```
|
PraveenJesu/whisper-medium-47-peft-V1-drug_1_list
|
PraveenJesu
| 2023-08-31T16:17:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T16:17:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Varshitha/llama2-qlora-medicine-fine-tuned
|
Varshitha
| 2023-08-31T16:11:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T15:47:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
vikp/instruct_llama_7b
|
vikp
| 2023-08-31T16:10:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"dataset:vikp/python_code_instructions_filtered",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T00:39:09Z |
---
datasets:
- vikp/python_code_instructions_filtered
---
This is code llama 7b finetuned for one epoch on a set of python code and instructions. Scores `.512` in humaneval with greedy decoding (matched to code llama pass@1).
To use in inference, you'll need to set `trust_remote_code = True` to pick up the right rope theta value:
```
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("vikp/code_llama_7B_hf")
model = AutoModelForCausalLM.from_pretrained("vikp/instruct_llama_7b", trust_remote_code=True)
text = tokenizer.bos_token + """\
import socket
def ping_exponential_backoff(host: str):""".lstrip()
tokens = tokenizer(text, return_tensors="pt")
output = model.generate(**tokens, max_new_tokens=128, do_sample=True, temperature=.1, top_p=1.0)
print(tokenizer.decode(output[0], skip_special_tokens=True).strip())
```
You can duplicate benchmark results with the bigcode eval harness:
```
git clone https://github.com/bigcode-project/bigcode-evaluation-harness.git
cd bigcode-evaluation-harness
pip install -e .
```
```
accelerate launch main.py \
--model vikp/instruct_llama_7b \
--tasks humaneval \
--max_length_generation 1024 \
--temperature 0 \
--do_sample False \
--n_samples 1 \
--precision fp16 \
--allow_code_execution \
--save_generations \
--use_auth_token \
--trust_remote_code
```
|
jalaluddin94/baseline_nli_xlmr
|
jalaluddin94
| 2023-08-31T16:04:52Z | 179 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T16:03:29Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: baseline_nli_xlmr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline_nli_xlmr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8598
- Accuracy: 0.6427
- Precision: 0.6427
- Recall: 0.6427
- F1 Score: 0.6445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 101
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| 1.0805 | 1.0 | 861 | 0.9653 | 0.5594 | 0.5594 | 0.5594 | 0.5629 |
| 0.9651 | 2.0 | 1722 | 0.9542 | 0.5762 | 0.5762 | 0.5762 | 0.5786 |
| 0.9313 | 3.0 | 2583 | 0.9236 | 0.5922 | 0.5922 | 0.5922 | 0.5928 |
| 0.8886 | 4.0 | 3444 | 0.8775 | 0.6409 | 0.6409 | 0.6409 | 0.6429 |
| 0.8656 | 5.0 | 4305 | 0.8727 | 0.6327 | 0.6327 | 0.6327 | 0.6350 |
| 0.8521 | 6.0 | 5166 | 0.8598 | 0.6427 | 0.6427 | 0.6427 | 0.6445 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
BugHunter1/speecht5_finetuned_voxpopuli_nl
|
BugHunter1
| 2023-08-31T16:02:09Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-31T12:09:40Z |
---
base_model: SpeechT5
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_facebook_voxpopuli_et
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_facebook_voxpopuli_et
This model is a fine-tuned version of [SpeechT5](https://huggingface.co/SpeechT5) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4557 | 177.78 | 1000 | 0.4377 |
| 0.4317 | 355.56 | 2000 | 0.4329 |
| 0.4244 | 533.33 | 3000 | 0.4303 |
| 0.4223 | 711.11 | 4000 | 0.4303 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bhenrym14/airoboros-l2-13b-2.1-PI-16k-fp16
|
bhenrym14
| 2023-08-31T16:02:07Z | 7 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-2.1",
"dataset:kmfoda/booksum",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T14:43:12Z |
---
datasets:
- jondurbin/airoboros-2.1
- kmfoda/booksum
---
# RoPE Scaled QLoRA Fine-tune of Llama-2 13b on airoboros-2.1, with Long Context Pretraining (fp16 weights)
## Overview
This is a finetune of Llama-2-13b, intended to extend the useful context window to 16384 tokens via position interpolation (PI). There are two training phases:
1. Scaling the RoPE embeddings by a factor of 0.25 (linear method) and training on 16384-token sequences from the `chapter` component of the [booksum](https://huggingface.co/datasets/kmfoda/booksum) dataset (one epoch, ~150mm tokens).
2. The model was then finetuned on [Jon Durbin's Airoboros 2.1 dataset](https://huggingface.co/datasets/jondurbin/airoboros-2.1), with same scaling approach, for 2 epochs.
**This is a (merged) QLoRA fine-tune (rank 64)**.
The finetune was performed with 1x RTX 6000 Ada.
## How to Use
This model employs linear RoPE scaling, which now has native support in `Transformers` (be sure to update it if you have issues). Use it as you would with any normal context length variant.
Please comment with any questions.
Ooba use: Be sure to increase the `Truncate the prompt up to this length` parameter to 16384 to utilize the full context capabilities.
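As a minimal loading sketch (assuming a recent `transformers` release; the prompt string is only a placeholder, see the prompting section below):

```python
# Sketch only: the linear RoPE scaling factor is read from the model config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bhenrym14/airoboros-l2-13b-2.1-PI-16k-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

prompt = "A chat.\nUSER: Summarize the key events of the chapter below.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```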
## Motivation
Given the excellent performance of llama-2 13b finetunes relative to llama 33b, I have received several requests for a 16k model using the latest airoboros dataset. Furthermore, while partial NTK scaling appears to be better for retaining short context performance, it is not natively supported in `transformers` and is thus not as accessible to less technical audiences. This model is designed to offer long context capabilities with the stylistic characteristics of the new airoboros dataset without any additional configuration.
## Relative Performance (wikitext perplexity)
| Context (tokens) | **bhenrym14/airoboros-l2-13b-PI-16k-fp16** | bhenrym14/airophin-v2-13b-PI-8k-fp16 | bhenrym14/airophin-13b-pntk-16k-fp16| bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-fp16 |bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-fp16 | jondurbin/airoboros-l2-13b-gpt4-1.4.1 |
| --- | --- | ---| ----- | -----| ------| --- |
| 512 | 7.67 | 7.38 | 7.62 | 8.24 | 7.90 | **7.23** |
| 1024 | 6.15 | 5.99 | 6.20 | 6.71 | 6.17 | **5.85** |
| 2048 | 5.29 | 5.22 | 5.38 | 5.87 | 5.23 | **5.07** |
| 4096 | 4.94 | 4.90 | 5.08 | 5.50 | 4.91 | **4.77** |
| 8192 | **4.71** | **4.71** | 4.90 | 5.32 | Not Tested | 57.1 |
| 12000 | **4.54** | 55 | 4.82 | 56.1 | Not Tested | Not Tested |
- Larger PI scaling factors increase short context performance degradation. If you don't require 16k context, you're better off using a model with a different context extension method, or a smaller (or no) PI scaling factor. Given this, don't expect anything special from this model on the HF leaderboard. Whether or not this is relevant to you will depend on your intended use case.
- Beyond 8k, this model has lower perplexity than all other models tested here.
- I'm actively exploring/implementing other context extension methods that may ameliorate the tendency of PI methods to impair the ability of the model to attend to the context space equally.
## Prompting:
Prompting differs with the airoboros 2.1 models. See [jondurbin/airoboros-l2-13b-2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
|
y22ma/Kosmos2-endpoint
|
y22ma
| 2023-08-31T15:58:04Z | 0 | 3 | null |
[
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T18:36:49Z |
---
license: mit
---
A model card with a custom handler to deploy the [MSFT Kosmos2](https://huggingface.co/spaces/ydshieh/Kosmos-2) model as an inference endpoint. Enjoy!
Hit me up on X/Twitter @yanMachX and let me know what you guys are building!
### Expected Request payload
```json
{
"inputs": "you can just leave this empty, for some reason, inference endpoint expects this",
# base64 encoded string representation of the image blob, here's an example
"image" : "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC"
}
```
There is a [python script](https://huggingface.co/y22ma/Kosmos2-endpoint/blob/main/endpoint_tester.py) that provides an example API call to the inference endpoint.
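For reference, a hypothetical client sketch (the endpoint URL and token below are placeholders, not values from this repo):

```python
# Hypothetical client sketch: replace the URL and token with your own endpoint's values.
import base64
import requests

API_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
HEADERS = {"Authorization": "Bearer <HF_TOKEN>", "Content-Type": "application/json"}

with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {"inputs": "", "image": image_b64}
response = requests.post(API_URL, headers=HEADERS, json=payload)
print(response.json())
```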
|
PrimeQA/open-nq-colbert-xlmr-large
|
PrimeQA
| 2023-08-31T15:57:31Z | 35 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-11-17T02:47:00Z |
---
license: apache-2.0
---
# Model Description
This is a retriever model based on ColBERT v2 with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) language model.<br>
This model was trained with the OpenNQ data.<br>
The architecture of the model and its hyperparameters are described in the paper ‘Relevance-guided Supervision for OpenQA with ColBERT’.
## Intended uses & limitations
This model uses the xlm-roberta-large LM. Biases associated with the pre-trained language model we used may be present in this ColBERT v2 model.
## Usage
This model can be used with [PrimeQA](https://github.com/primeqa/primeqa)’s [ColBERT](https://github.com/primeqa/primeqa/blob/main/primeqa/ir/README.md) engine.
## BibTeX entry and citation info
```bibtex
@article{Khattab2021RelevanceguidedSF,
title={Relevance-guided Supervision for OpenQA with ColBERT},
author={O. Khattab and Christopher Potts and Matei A. Zaharia},
journal={Transactions of the Association for Computational Linguistics},
year={2021},
}
```
```bibtex
@article{Lee2019LatentRF,
title={Latent Retrieval for Weakly Supervised Open Domain Question Answering},
author={Kenton Lee and Ming-Wei Chang and Kristina Toutanova},
journal={ACL},
year={2019}
}
```
|
profetize/bert-base-cased-wikitext2
|
profetize
| 2023-08-31T15:55:20Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-31T15:41:56Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1015 | 1.0 | 2346 | 7.0572 |
| 6.8896 | 2.0 | 4692 | 6.8889 |
| 6.8745 | 3.0 | 7038 | 6.8896 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rrozb/q-FrozenLake-v1-4x4-noSlippery
|
rrozb
| 2023-08-31T15:54:55Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T15:54:52Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="rrozb/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnikaAI/bert-finetuned-ner
|
AnikaAI
| 2023-08-31T15:51:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-31T07:11:23Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9357296670531721
- name: Recall
type: recall
value: 0.9506900033658701
- name: F1
type: f1
value: 0.9431505133984472
- name: Accuracy
type: accuracy
value: 0.9867251427562254
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0578
- Precision: 0.9357
- Recall: 0.9507
- F1: 0.9432
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0793 | 1.0 | 1756 | 0.0803 | 0.9096 | 0.9345 | 0.9219 | 0.9795 |
| 0.041 | 2.0 | 3512 | 0.0537 | 0.9267 | 0.9465 | 0.9365 | 0.9859 |
| 0.025 | 3.0 | 5268 | 0.0578 | 0.9357 | 0.9507 | 0.9432 | 0.9867 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ardt-multipart/ardt-multipart-ppo_train_walker2d_level-3108_1436-99
|
ardt-multipart
| 2023-08-31T15:45:37Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T13:38:12Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-ppo_train_walker2d_level-3108_1436-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-ppo_train_walker2d_level-3108_1436-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dhmeltzer/llama-7b-SFT-qlora-wiki_DPO_ds_RM_random_1024_r_64_alpha_16
|
dhmeltzer
| 2023-08-31T15:45:04Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged",
"base_model:finetune:dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged",
"region:us"
] | null | 2023-08-31T06:52:56Z |
---
base_model: dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged
tags:
- generated_from_trainer
model-index:
- name: llama-7b-SFT-qlora-wiki_DPO_ds_RM_random_1024_r_64_alpha_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-SFT-qlora-wiki_DPO_ds_RM_random_1024_r_64_alpha_16
This model is a fine-tuned version of [dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged](https://huggingface.co/dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6801
- Rewards/chosen: -0.1790
- Rewards/rejected: -0.2369
- Rewards/accuracies: 0.5469
- Rewards/margins: 0.0578
- Logps/rejected: -206.1121
- Logps/chosen: -202.9860
- Logits/rejected: 1.1465
- Logits/chosen: 1.1674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6904 | 0.1 | 19 | 0.6904 | -0.3143 | -0.3636 | 0.5458 | 0.0493 | -207.3793 | -204.3384 | 1.1224 | 1.1416 |
| 0.6725 | 0.21 | 38 | 0.6850 | -0.3901 | -0.4540 | 0.5547 | 0.0640 | -208.2836 | -205.0964 | 1.1270 | 1.1469 |
| 0.6818 | 0.31 | 57 | 0.6801 | -0.1790 | -0.2369 | 0.5469 | 0.0578 | -206.1121 | -202.9860 | 1.1465 | 1.1674 |
| 0.6671 | 0.41 | 76 | 0.6863 | -0.2598 | -0.3469 | 0.5580 | 0.0871 | -207.2126 | -203.7936 | 1.1468 | 1.1665 |
| 0.6683 | 0.52 | 95 | 0.6841 | -0.1475 | -0.2325 | 0.5502 | 0.0851 | -206.0687 | -202.6704 | 1.1388 | 1.1590 |
| 0.6626 | 0.62 | 114 | 0.6846 | -0.0836 | -0.1600 | 0.5480 | 0.0764 | -205.3429 | -202.0314 | 1.1263 | 1.1474 |
| 0.6593 | 0.72 | 133 | 0.6864 | -0.1272 | -0.2184 | 0.5625 | 0.0912 | -205.9276 | -202.4675 | 1.1106 | 1.1306 |
| 0.672 | 0.83 | 152 | 0.6857 | -0.1452 | -0.2334 | 0.5592 | 0.0882 | -206.0777 | -202.6477 | 1.1086 | 1.1293 |
| 0.6671 | 0.93 | 171 | 0.6855 | -0.1472 | -0.2350 | 0.5547 | 0.0878 | -206.0934 | -202.6673 | 1.1071 | 1.1270 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yaohuacn/ppo-SnowballTarget2
|
yaohuacn
| 2023-08-31T15:44:53Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-31T15:44:43Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: yaohuacn/ppo-SnowballTarget2
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CyberHarem/hanabatake_yoshie_ahogirl
|
CyberHarem
| 2023-08-31T15:40:01Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/hanabatake_yoshie_ahogirl",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-31T10:10:49Z |
---
license: mit
datasets:
- CyberHarem/hanabatake_yoshie_ahogirl
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hanabatake_yoshie_ahogirl
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4800, you need to download `4800/hanabatake_yoshie_ahogirl.pt` as the embedding and `4800/hanabatake_yoshie_ahogirl.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
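As a sketch (assuming `huggingface_hub` is installed), both files for a given step could be fetched like this; the step value is just an example:

```python
# Sketch only: downloads the embedding (.pt) and LoRA (.safetensors) for one step.
from huggingface_hub import hf_hub_download

repo_id = "CyberHarem/hanabatake_yoshie_ahogirl"
step = 4800  # the recommended step from this card
embedding_path = hf_hub_download(repo_id, f"{step}/hanabatake_yoshie_ahogirl.pt")
lora_path = hf_hub_download(repo_id, f"{step}/hanabatake_yoshie_ahogirl.safetensors")
print(embedding_path, lora_path)
```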
**The best step we recommend is 4800**, with the score of 0.737. The trigger words are:
1. `hanabatake_yoshie_ahogirl`
2. `brown_hair, open_mouth, long_hair, smile, red_eyes`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6000 | 0.688 | [Download](6000/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5600 | 0.653 | [Download](5600/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](5600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) |  |  |
| 5200 | 0.696 | [Download](5200/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| **4800** | **0.737** | [**Download**](4800/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4400 | 0.636 | [Download](4400/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 4000 | 0.661 | [Download](4000/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3600 | 0.579 | [Download](3600/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3200 | 0.573 | [Download](3200/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2800 | 0.446 | [Download](2800/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](2800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) |  |  |
| 2400 | 0.509 | [Download](2400/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 2000 | 0.393 | [Download](2000/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1600 | 0.525 | [Download](1600/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](1600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1600/previews/nude.png) | [<NSFW, click to see>](1600/previews/nude2.png) |  |  |
| 1200 | 0.408 | [Download](1200/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](1200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) |  |  |
| 800 | 0.355 | [Download](800/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [<NSFW, click to see>](800/previews/nude2.png) |  |  |
| 400 | 0.042 | [Download](400/hanabatake_yoshie_ahogirl.zip) |  |  |  |  |  | [<NSFW, click to see>](400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [<NSFW, click to see>](400/previews/nude2.png) |  |  |
|
profetize/gpt2-wikitext2
|
profetize
| 2023-08-31T15:39:10Z | 227 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-30T20:01:54Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2); the training dataset was not recorded by the Trainer that generated this card (the model name suggests WikiText-2).
It achieves the following results on the evaluation set:
- Loss: 6.0758
## Model description
More information needed
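A minimal usage sketch, assuming the checkpoint is meant to be used as an ordinary GPT-2 causal language model (the card itself does not document intended usage):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a standard GPT-2 text generator.
generator = pipeline("text-generation", model="profetize/gpt2-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```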
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5344 | 1.0 | 2256 | 6.4312 |
| 6.1159 | 2.0 | 4512 | 6.1643 |
| 5.9902 | 3.0 | 6768 | 6.0758 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Korkkork/atsushisakurai
|
Korkkork
| 2023-08-31T15:34:34Z | 0 | 0 | null |
[
"vkei",
"Artist",
"Bucktick",
"license:openrail",
"region:us"
] | null | 2023-08-31T15:33:22Z |
---
license: openrail
tags:
- vkei
- Artist
- Bucktick
---
|
mgmeskill/downstrike-40m
|
mgmeskill
| 2023-08-31T15:33:34Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-31T15:31:34Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mgmeskill/downstrike-40m
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Brainclub5000/5z6k-ldz0-3ch1
|
Brainclub5000
| 2023-08-31T15:32:40Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2023-08-31T15:32:40Z |
---
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photos of people on the beach in napoli
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
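A minimal inference sketch, assuming AutoTrain pushed a complete diffusers pipeline to this repository (if only DreamBooth/LoRA weights were saved, load `runwayml/stable-diffusion-v1-5` first and attach the weights instead):
```python
import torch
from diffusers import DiffusionPipeline

# Assumption: the repo contains a full diffusers pipeline produced by AutoTrain DreamBooth.
pipe = DiffusionPipeline.from_pretrained(
    "Brainclub5000/5z6k-ldz0-3ch1", torch_dtype=torch.float16
).to("cuda")

image = pipe("photos of people on the beach in napoli", num_inference_steps=30).images[0]
image.save("beach.png")
```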
|
DunnBC22/bert-large-uncased-Fake_Reviews_Classifier
|
DunnBC22
| 2023-08-31T15:30:54Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"region:us"
] | null | 2023-08-08T03:08:29Z |
---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: bert-large-uncased-Fake_Reviews_Classifier
results: []
---
# bert-large-uncased-Fake_Reviews_Classifier
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased).
It achieves the following results on the evaluation set:
- Loss: 0.5336
- Accuracy: 0.8381
- F1
- Weighted: 0.8142
- Micro: 0.8381
- Macro: 0.6308
- Recall
- Weighted: 0.8381
- Micro: 0.8381
- Macro: 0.6090
- Precision
- Weighted: 0.8101
- Micro: 0.8381
- Macro: 0.7029
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Binary%20Classification/Fake%20Reviews/Fake%20Reviews%20Classification%20-%20BERT-Large%20With%20PEFT.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology. You are welcome to test and experiment with this model, but it is at your own risk/peril.
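A hedged loading sketch: the linked notebook fine-tunes with PEFT, so the snippet below assumes this repo hosts LoRA adapters for `bert-large-uncased` with two labels; if a fully merged checkpoint is hosted instead, a plain `AutoModelForSequenceClassification.from_pretrained` on the repo id is enough.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

# Assumption: the repo stores LoRA adapters trained on top of bert-large-uncased (binary labels).
base = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased", num_labels=2)
model = PeftModel.from_pretrained(base, "DunnBC22/bert-large-uncased-Fake_Reviews_Classifier")
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")

inputs = tokenizer("This product changed my life, five stars!!!", return_tensors="pt")
print(model(**inputs).logits.softmax(-1))
```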
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/razamukhtar007/fake-reviews
__Histogram of Word Counts of Reviews__

__Class Distribution__

## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.633 | 1.0 | 10438 | 0.5608 | 0.8261 | 0.7914 | 0.8261 | __0.5745__ | 0.8261 | 0.8261 | 0.5643 | 0.7844 | 0.8261 | 0.6542 |
| 0.6029 | 2.0 | 20876 | 0.6490 | 0.8331 | 0.7724 | 0.8331 | __0.5060__ | 0.8331 | 0.8331 | 0.5239 | 0.7892 | 0.8331 | 0.6929 |
| 0.5478 | 3.0 | 31314 | 0.5508 | 0.8305 | 0.8071 | 0.8305 | __0.6189__ | 0.8305 | 0.8305 | 0.6003 | 0.8002 | 0.8305 | 0.6784 |
| 0.513 | 4.0 | 41752 | 0.5459 | 0.8347 | 0.8101 | 0.8347 | __0.6224__ | 0.8347 | 0.8347 | 0.6023 | 0.8049 | 0.8347 | 0.6916 |
| 0.5288 | 5.0 | 52190 | 0.5336 | 0.8381 | 0.8142 | 0.8381 | __0.6308__ | 0.8381 | 0.8381 | 0.6090 | 0.8101 | 0.8381 | 0.7029 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
haih2/open-calm-7b-summarizer-lora
|
haih2
| 2023-08-31T15:16:08Z | 32 | 1 |
peft
|
[
"peft",
"text-generation",
"ja",
"arxiv:2305.14314",
"region:us"
] |
text-generation
| 2023-08-29T11:57:48Z |
---
library_name: peft
language:
- ja
pipeline_tag: text-generation
---
# Fine-tuned OpenCALM-7B Adapters for Meeting Summarization
## Description
These are weights for LoRA adapters fine-tuned on the OpenCALM-7B ([Andonian et al., 2021](https://huggingface.co/cyberagent/open-calm-7b)) model for Japanese meeting summarization.
## Usage
### Load model and tokenizer
Loading the model in 4-bit quantized format is recommended for reliable results, since these LoRA adapters were trained with QLoRA ([Dettmers et al., 2023](https://arxiv.org/abs/2305.14314)).
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-7b")
model = AutoModelForCausalLM.from_pretrained(
"cyberagent/open-calm-7b",
quantization_config=bnb_config,
device_map="auto"
)
model = PeftModel.from_pretrained(model, "haih2/open-calm-7b-summarizer-lora")
```
### Generate summary
In the prompt provided to the model:
* The first part is the length of the summary to be generated,
* and the second part is the source meeting to be summarized.
```python
prompt = "この段落の要約50字以内生成:次に、私立高校の生徒に対する留学支援についてでございますが、都内の私立高校は、それぞれの学校における教育方針に基づきまして、生徒の留学先として海外の学校と提携するなど、既にさまざまな独自の取り組みを進めております。\\nこうした状況等を踏まえ、私立高校を対象とした留学支援のあり方について、今後検討してまいります。\\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=256,
do_sample=True,
temperature=0.7,
top_k=32,
top_p=0.9,
repetition_penalty=1.0,
no_repeat_ngram_size=0,
pad_token_id=tokenizer.pad_token_id,
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Prompt Format
Any prompt format will work, but it is recommended to include `length` and `source` parts, for example:
```
"この段落を{length}に要約しなさい:{source}\n要約:"
```
or
```
"この段落の要約{length}生成:{source}\n"
```
## Fine-tuning Details
### Dataset
* [Congressional meeting's minutes](https://github.com/kmr-y/NTCIR14-QALab-PoliInfo-FormalRunDataset/tree/master) provided by QA Lab PoliInfo.
### Fine-tuning procedure
The OpenCALM-7B model was fine-tuned on the above dataset using the QLoRA method with prompt `この段落の要約{length}生成:{source}\n`. We outline the following hyperparameters:
|||
|----------------|----------------:|
| **Optimizer** <br>   beta_1 <br>   beta_2 <br>   weight decay | AdamW <br> 0.9 <br> 0.999 <br> 0.01 |
| **Learning rate** <br>   scheduler type | 2e-5 <br> linear |
| **LoRA** <br>   target modules <br>   r <br>   alpha <br>   dropout | <br> query_key_value, dense <br> 4 <br> 64 <br> 0.05 |
| **Quantization (for QLoRA)** <br>   compute dtype <br>   storage dtype <br>   quantization strategy | <br> float16 <br> nf4 <br> double quantization |
| **Sequence length** | 1536 |
| **Batch size** | 4 |
| **Gradient accumulation steps** | 2 |
| **Epochs** | 10 |
| **Warmup steps** | 200 |
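For reference, the table above translates roughly into the following PEFT/bitsandbytes configuration; this is a sketch reconstructed from the stated hyperparameters, not the authors' actual training script:
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 storage, double quantization, float16 compute — as listed in the quantization rows.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA rows: r=4, alpha=64, dropout=0.05 on the query_key_value and dense modules.
lora_config = LoraConfig(
    r=4,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["query_key_value", "dense"],
    task_type="CAUSAL_LM",
)
```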
## Evaluation
### Testing data & Metric
We evaluated the model on two sets: one for *multi-topic* summarization and the other for *single-topic* summarization. ROUGE-L (F1-score-based) with the [Japanese MeCab tokenizer](https://pypi.org/project/mecab-python3/) was used as the evaluation metric.
### Results
| Solution/Model | ROUGE-L <br> (multi-topic) | ROUGE-L <br> (single-topic) |
|----------------|:--------------------------:|:---------------------------:|
|1st place solution* |34.12 |**34.44**|
|2nd place solution* |32.79 |33.65 |
|*OpenCALM-7B (QLoRA)*|***36.75***|*33.31* |
*\* These scores are extracted from this [leaderboard](https://github.com/PoliInfo/PoliInfo.github.io/blob/master/FormalRunResult.md) for the summarization task.*
|
alexdbz/bert-base-peft-Lora-abstracts-2epochs
|
alexdbz
| 2023-08-31T15:06:14Z | 8 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T15:06:10Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
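Since the card gives no usage details, the adapter config itself is the best source of truth; below is a minimal inspection sketch (the base model and task type are read from `adapter_config.json` rather than assumed here):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModel

repo = "alexdbz/bert-base-peft-Lora-abstracts-2epochs"

# Discover which base model and task the LoRA adapters were trained for.
config = PeftConfig.from_pretrained(repo)
print(config.base_model_name_or_path, config.task_type)

# Attach the adapters to that base (swap AutoModel for the task-specific class if one is required).
base = AutoModel.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, repo)
```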
|
dkqjrm/20230831190406
|
dkqjrm
| 2023-08-31T15:03:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T10:04:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230831190406'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230831190406
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6234
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.6536 | 0.5 |
| 0.6466 | 2.0 | 680 | 0.6207 | 0.5 |
| 0.6506 | 3.0 | 1020 | 0.6654 | 0.5 |
| 0.6506 | 4.0 | 1360 | 0.6698 | 0.5 |
| 0.6458 | 5.0 | 1700 | 0.6234 | 0.5 |
| 0.6363 | 6.0 | 2040 | 0.6246 | 0.5 |
| 0.6363 | 7.0 | 2380 | 0.6367 | 0.5 |
| 0.6401 | 8.0 | 2720 | 0.6582 | 0.5 |
| 0.6347 | 9.0 | 3060 | 0.6257 | 0.5 |
| 0.6347 | 10.0 | 3400 | 0.6281 | 0.5 |
| 0.6378 | 11.0 | 3740 | 0.6234 | 0.5 |
| 0.637 | 12.0 | 4080 | 0.6274 | 0.5 |
| 0.637 | 13.0 | 4420 | 0.6362 | 0.5 |
| 0.6313 | 14.0 | 4760 | 0.6290 | 0.5 |
| 0.6359 | 15.0 | 5100 | 0.6302 | 0.5 |
| 0.6359 | 16.0 | 5440 | 0.6246 | 0.5 |
| 0.639 | 17.0 | 5780 | 0.6319 | 0.5 |
| 0.6302 | 18.0 | 6120 | 0.6255 | 0.5 |
| 0.6302 | 19.0 | 6460 | 0.6325 | 0.5 |
| 0.6329 | 20.0 | 6800 | 0.6434 | 0.5 |
| 0.6309 | 21.0 | 7140 | 0.6238 | 0.5 |
| 0.6309 | 22.0 | 7480 | 0.6237 | 0.5 |
| 0.6325 | 23.0 | 7820 | 0.6296 | 0.5 |
| 0.6303 | 24.0 | 8160 | 0.6249 | 0.5 |
| 0.6357 | 25.0 | 8500 | 0.6235 | 0.5 |
| 0.6357 | 26.0 | 8840 | 0.6258 | 0.5 |
| 0.6327 | 27.0 | 9180 | 0.6442 | 0.5 |
| 0.6309 | 28.0 | 9520 | 0.6329 | 0.5 |
| 0.6309 | 29.0 | 9860 | 0.6374 | 0.5 |
| 0.6304 | 30.0 | 10200 | 0.6243 | 0.5 |
| 0.6311 | 31.0 | 10540 | 0.6302 | 0.5 |
| 0.6311 | 32.0 | 10880 | 0.6247 | 0.5 |
| 0.6294 | 33.0 | 11220 | 0.6233 | 0.5 |
| 0.6303 | 34.0 | 11560 | 0.6252 | 0.5 |
| 0.6303 | 35.0 | 11900 | 0.6365 | 0.5 |
| 0.63 | 36.0 | 12240 | 0.6300 | 0.5 |
| 0.6304 | 37.0 | 12580 | 0.6290 | 0.5 |
| 0.6304 | 38.0 | 12920 | 0.6243 | 0.5 |
| 0.6288 | 39.0 | 13260 | 0.6440 | 0.5 |
| 0.6298 | 40.0 | 13600 | 0.6260 | 0.5 |
| 0.6298 | 41.0 | 13940 | 0.6296 | 0.5 |
| 0.6292 | 42.0 | 14280 | 0.6245 | 0.5 |
| 0.6255 | 43.0 | 14620 | 0.6253 | 0.5 |
| 0.6255 | 44.0 | 14960 | 0.6459 | 0.5 |
| 0.631 | 45.0 | 15300 | 0.6321 | 0.5 |
| 0.6248 | 46.0 | 15640 | 0.6314 | 0.5 |
| 0.6248 | 47.0 | 15980 | 0.6335 | 0.5 |
| 0.6293 | 48.0 | 16320 | 0.6240 | 0.5 |
| 0.6285 | 49.0 | 16660 | 0.6238 | 0.5 |
| 0.6277 | 50.0 | 17000 | 0.6247 | 0.5 |
| 0.6277 | 51.0 | 17340 | 0.6378 | 0.5 |
| 0.625 | 52.0 | 17680 | 0.6237 | 0.5 |
| 0.6301 | 53.0 | 18020 | 0.6246 | 0.5 |
| 0.6301 | 54.0 | 18360 | 0.6236 | 0.5 |
| 0.6247 | 55.0 | 18700 | 0.6237 | 0.5 |
| 0.6253 | 56.0 | 19040 | 0.6252 | 0.5 |
| 0.6253 | 57.0 | 19380 | 0.6261 | 0.5 |
| 0.6243 | 58.0 | 19720 | 0.6250 | 0.5 |
| 0.6268 | 59.0 | 20060 | 0.6387 | 0.5 |
| 0.6268 | 60.0 | 20400 | 0.6233 | 0.5 |
| 0.625 | 61.0 | 20740 | 0.6239 | 0.5 |
| 0.6245 | 62.0 | 21080 | 0.6233 | 0.5 |
| 0.6245 | 63.0 | 21420 | 0.6256 | 0.5 |
| 0.6232 | 64.0 | 21760 | 0.6263 | 0.5 |
| 0.6279 | 65.0 | 22100 | 0.6233 | 0.5 |
| 0.6279 | 66.0 | 22440 | 0.6339 | 0.5 |
| 0.6185 | 67.0 | 22780 | 0.6237 | 0.5 |
| 0.627 | 68.0 | 23120 | 0.6246 | 0.5 |
| 0.627 | 69.0 | 23460 | 0.6241 | 0.5 |
| 0.6242 | 70.0 | 23800 | 0.6254 | 0.5 |
| 0.6229 | 71.0 | 24140 | 0.6236 | 0.5 |
| 0.6229 | 72.0 | 24480 | 0.6242 | 0.5 |
| 0.621 | 73.0 | 24820 | 0.6238 | 0.5 |
| 0.6226 | 74.0 | 25160 | 0.6237 | 0.5 |
| 0.6222 | 75.0 | 25500 | 0.6233 | 0.5 |
| 0.6222 | 76.0 | 25840 | 0.6244 | 0.5 |
| 0.6224 | 77.0 | 26180 | 0.6234 | 0.5 |
| 0.6212 | 78.0 | 26520 | 0.6239 | 0.5 |
| 0.6212 | 79.0 | 26860 | 0.6238 | 0.5 |
| 0.6222 | 80.0 | 27200 | 0.6234 | 0.5 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jackoyoungblood/distilhubert-finetuned-gtzan
|
jackoyoungblood
| 2023-08-31T15:02:10Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-29T19:37:33Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6889
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
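A hedged usage sketch (the card leaves this section empty): the checkpoint should work with the standard audio-classification pipeline.
```python
from transformers import pipeline

# Assumption: standard audio-classification usage; "track.wav" is a placeholder for a local audio file.
classifier = pipeline("audio-classification", model="jackoyoungblood/distilhubert-finetuned-gtzan")
print(classifier("track.wav", top_k=3))
```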
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00018
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7089 | 1.0 | 113 | 1.3908 | 0.47 |
| 1.0384 | 2.0 | 226 | 1.0306 | 0.65 |
| 0.9678 | 3.0 | 339 | 0.9619 | 0.66 |
| 0.9463 | 4.0 | 452 | 0.5874 | 0.8 |
| 0.5288 | 5.0 | 565 | 0.6033 | 0.83 |
| 0.1325 | 6.0 | 678 | 0.6730 | 0.87 |
| 0.2124 | 7.0 | 791 | 0.7158 | 0.84 |
| 0.0054 | 8.0 | 904 | 0.7187 | 0.86 |
| 0.004 | 9.0 | 1017 | 0.6297 | 0.88 |
| 0.0026 | 10.0 | 1130 | 0.6889 | 0.89 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dt-and-vanilla-ardt/ardt-vanilla-ppo_train_walker2d_level-3108_1426-99
|
dt-and-vanilla-ardt
| 2023-08-31T15:01:31Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T13:27:43Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-ppo_train_walker2d_level-3108_1426-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-ppo_train_walker2d_level-3108_1426-99
This model is a fine-tuned version of an unspecified base checkpoint; neither the base model nor the dataset was recorded by the Trainer that generated this card.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|