modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-27 18:27:08) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 533 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-27 18:22:57) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---|
JamesNg12/my_awesome_model
|
JamesNg12
| 2023-06-20T00:35:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-20T00:21:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93048
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2331
- Accuracy: 0.9305
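A minimal inference sketch with the Transformers `pipeline` (the example review below is invented for illustration):
```python
# Hedged usage sketch; the input sentence is a made-up movie review.
from transformers import pipeline

classifier = pipeline("text-classification", model="JamesNg12/my_awesome_model")
print(classifier("This movie was an absolute delight from start to finish."))
```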
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2334 | 1.0 | 1563 | 0.1881 | 0.9280 |
| 0.1504 | 2.0 | 3126 | 0.2331 | 0.9305 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
peteryeung/ppo-LunarLander-v2
|
peteryeung
| 2023-06-20T00:05:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-20T00:05:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.00 +/- 16.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with Stable-Baselines3 and `huggingface_sb3` (the checkpoint filename is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed; use the .zip actually stored in the repo.
checkpoint = load_from_hub("peteryeung/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AustinCarthy/MixGPT2V2_suffix_100KP_BFall_fromB_95K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-19T23:34:55Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-19T21:18:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2V2_suffix_100KP_BFall_fromB_95K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2V2_suffix_100KP_BFall_fromB_95K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_MixGPT2V2_using_benign_95K_top_p_0.75suffix dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0286
- Accuracy: 0.9964
- F1: 0.9612
- Precision: 0.9728
- Recall: 0.95
- Roc Auc Score: 0.9743
- Tpr At Fpr 0.01: 0.7924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0218 | 1.0 | 22121 | 0.0193 | 0.9952 | 0.9485 | 0.9717 | 0.9264 | 0.9625 | 0.7698 |
| 0.013 | 2.0 | 44242 | 0.0213 | 0.9957 | 0.9546 | 0.9675 | 0.942 | 0.9702 | 0.799 |
| 0.0041 | 3.0 | 66363 | 0.0262 | 0.9951 | 0.9494 | 0.9395 | 0.9596 | 0.9783 | 0.792 |
| 0.0034 | 4.0 | 88484 | 0.0223 | 0.9964 | 0.9618 | 0.9657 | 0.958 | 0.9781 | 0.8558 |
| 0.001 | 5.0 | 110605 | 0.0286 | 0.9964 | 0.9612 | 0.9728 | 0.95 | 0.9743 | 0.7924 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
natope/closed-book-19-06-2023
|
natope
| 2023-06-19T23:25:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T21:46:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: closed-book-19-06-2023
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# closed-book-19-06-2023
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3172
- Rouge1: 0.1403
- Rouge2: 0.039
- Rougel: 0.117
- Rougelsum: 0.117
- Gen Len: 17.9153
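A minimal generation sketch with the Transformers `pipeline` (the example question is invented, and the expected input format is not documented in this card):
```python
# Hedged usage sketch; see the assumptions noted above.
from transformers import pipeline

generator = pipeline("text2text-generation", model="natope/closed-book-19-06-2023")
print(generator("Who wrote Dom Casmurro?", max_length=32)[0]["generated_text"])
```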
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 4.3252 | 1.0 | 5736 | 3.4360 | 0.136 | 0.0339 | 0.114 | 0.1139 | 17.5869 |
| 4.0744 | 2.0 | 11472 | 3.3393 | 0.1404 | 0.038 | 0.117 | 0.1169 | 17.971 |
| 4.0149 | 3.0 | 17208 | 3.3172 | 0.1403 | 0.039 | 0.117 | 0.117 | 17.9153 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
andywalner/ppo-LunarLander-v2
|
andywalner
| 2023-06-19T23:21:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T23:21:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.80 +/- 22.95
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with Stable-Baselines3 and `huggingface_sb3` (the checkpoint filename is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed; use the .zip actually stored in the repo.
checkpoint = load_from_hub("andywalner/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MindNetML/ppo-LunarLander-v2
|
MindNetML
| 2023-06-19T23:07:39Z | 1 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T23:07:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.22 +/- 28.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with Stable-Baselines3 and `huggingface_sb3` (the checkpoint filename is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed; use the .zip actually stored in the repo.
checkpoint = load_from_hub("MindNetML/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
aphi/dqn-SpaceInvadersNoFrameskip-v4_1
|
aphi
| 2023-06-19T23:07:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T23:06:48Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 330.50 +/- 71.74
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aphi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aphi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga aphi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
C-Lo/finetuning-sentiment-gendered-dataset
|
C-Lo
| 2023-06-19T22:58:29Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T22:55:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-gendered-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-gendered-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
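A hedged loading sketch with `AutoModelForSequenceClassification` (the example sentence is invented; label names come from the repository's config):
```python
# Hedged usage sketch; see the assumptions noted above.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "C-Lo/finetuning-sentiment-gendered-dataset"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("She delivered a brilliant, moving performance.", return_tensors="pt")
with torch.no_grad():
    predicted_id = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```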
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sid/ppo-Huggy
|
sid
| 2023-06-19T22:53:24Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-19T22:52:44Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: sid/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
MarketingHHM/autotrain-hhmqatest23-68104137216
|
MarketingHHM
| 2023-06-19T22:52:12Z | 98 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"led",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:MarketingHHM/autotrain-data-hhmqatest23",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-19T22:31:26Z |
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- MarketingHHM/autotrain-data-hhmqatest23
co2_eq_emissions:
emissions: 14.037553452269616
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 68104137216
- CO2 Emissions (in grams): 14.0376
## Validation Metrics
- Loss: 0.920
- Rouge1: 34.783
- Rouge2: 23.625
- RougeL: 29.390
- RougeLsum: 32.868
- Gen Len: 109.840
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/MarketingHHM/autotrain-hhmqatest23-68104137216
```
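Alternatively, a local inference sketch with the Transformers summarization `pipeline` (the input is just the widget example from this card):
```python
# Hedged local-inference sketch mirroring the cURL example above.
from transformers import pipeline

summarizer = pipeline("summarization", model="MarketingHHM/autotrain-hhmqatest23-68104137216")
print(summarizer("I love AutoTrain")[0]["summary_text"])
```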
|
gokuls/hbertv1-Massive-intent_48_KD
|
gokuls
| 2023-06-19T22:47:54Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T22:38:50Z |
---
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-Massive-intent_48_KD
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8357107722577471
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-Massive-intent_48_KD
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8470
- Accuracy: 0.8357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.994 | 1.0 | 180 | 2.1475 | 0.3901 |
| 1.7222 | 2.0 | 360 | 1.4146 | 0.6011 |
| 1.1889 | 3.0 | 540 | 1.1690 | 0.6990 |
| 0.9256 | 4.0 | 720 | 0.9700 | 0.7545 |
| 0.763 | 5.0 | 900 | 0.8986 | 0.7806 |
| 0.6351 | 6.0 | 1080 | 0.8898 | 0.7787 |
| 0.5374 | 7.0 | 1260 | 0.8604 | 0.7978 |
| 0.4587 | 8.0 | 1440 | 0.8444 | 0.8101 |
| 0.3822 | 9.0 | 1620 | 0.8520 | 0.8087 |
| 0.3301 | 10.0 | 1800 | 0.8309 | 0.8185 |
| 0.2713 | 11.0 | 1980 | 0.8313 | 0.8249 |
| 0.2257 | 12.0 | 2160 | 0.8499 | 0.8254 |
| 0.1947 | 13.0 | 2340 | 0.8375 | 0.8298 |
| 0.162 | 14.0 | 2520 | 0.8428 | 0.8352 |
| 0.1369 | 15.0 | 2700 | 0.8470 | 0.8357 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dogruermikail/bert-fine-tuned-stock-sentiment-uncased
|
dogruermikail
| 2023-06-19T22:39:00Z | 61 | 3 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T18:29:35Z |
---
model-index:
- name: bert-fine-tuned-stock-sentiment-uncased
results: []
language:
- tr
metrics:
- accuracy
- f1
- precision
- recall
widget:
- text: "bugün tavan olabilir alımlar iyi"
example_title: "Positive"
- text: "üst kanala değdi çekilme bekliyorum"
example_title: "Negative"
- text: "bedelsiz tarihi belli mi?"
example_title: "Neutral"
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
## **Bert Uncased Model Fine Tuned For Stock Sentiment**
- This model is a fine-tuned version of the BERT (Bidirectional Encoder Representations from Transformers) model specifically
designed for analyzing stock sentiment. The fine-tuning process involved training the model on tagged comments from the last
two pages of the stock forum on the Investing platform, focusing on stocks listed in the BIST Index.
### Stock List:
- ACSEL, ADEL, ARCLK, ASELS, AZTEK, BIMAS, BFREN, BMSCH,
- CCOLA, CIMSA, CMBTN, CWENE,EKGYO, ENJSA, EREGL, FROTO,
- GOODY, GUBRF, HALKB, HEKTS, ISCTR, KCHOL, KOZAL, KOPOL,
- KRDMD, ONCSM, PETKM, PKART, SAHOL, SASA, SISE, SMRTG,
- THYAO, TMSN, TCELL, TTKOM, TOASO, TTRAK, TUPRS, VESTL, YAPRK, YKSLN
**This fine-tuned model aims to provide insights into the sentiment of these stocks based on the given tagged comments and
can be used for stock sentiment analysis in financial applications.**
[Colab File](https://colab.research.google.com/drive/1LqEqoeS90nxgXApS6GSwcNnhKBmmYUSY?usp=sharing)
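A hedged usage sketch in TensorFlow (the example comment is one of the widget texts above; label names come from the repository's config):
```python
# Hedged usage sketch; see the assumptions noted above.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "dogruermikail/bert-fine-tuned-stock-sentiment-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("bugün tavan olabilir alımlar iyi", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(model.config.id2label[int(tf.argmax(probs, axis=-1)[0])])
```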
### Training hyperparameters
The following hyperparameters were used during training:
- Optimizer: SGD
- Learning Rate: 3e-2
- Number of Training Epochs: 10
- Metric for Best Model: F1 Score
### Training Results
| **Epoch** | **Training Loss** | **Validation Loss** | **Accuracy** | **Precision** | **Recall** | **F1 Score** |
|-----------|-------------------|---------------------|--------------|---------------|------------|--------------|
| 1 | 1.057400 | 0.895725 | 0.621538 | 0.618631 | 0.612559 | 0.611949 |
| 2 | 0.908400 | 0.822652 | 0.632308 | 0.644781 | 0.629953 | 0.622661 |
| 3 | 0.812100 | 0.788586 | 0.656923 | 0.680735 | 0.659374 | 0.650310 |
| 4 | 0.747700 | 0.737312 | 0.667692 | 0.670311 | 0.668073 | 0.666547 |
| 5 | 0.712600 | 0.743018 | 0.692308 | 0.710226 | 0.691384 | 0.686578 |
| 6 | 0.659200 | 0.771312 | 0.670769 | 0.695524 | 0.669198 | 0.662246 |
| 7 | 0.608300 | 0.733821 | 0.680000 | 0.677778 | 0.678871 | 0.677992 |
| 8 | 0.575900 | 0.739905 | 0.701538 | 0.702704 | 0.700902 | 0.698514 |
| 9 | 0.565200 | 0.754889 | 0.692308 | 0.692446 | 0.693058 | 0.691157 |
| 10 | `0.541000` | `0.754683` | `0.704615` | `0.705291` | `0.704209` | `0.702093` |
### Evaluation Results
| Loss | Accuracy | Precision | Recall | F1 Score | Runtime | Samples/s | Steps/s | Epoch |
|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| 0.754683 | 0.704615 | 0.705291 | 0.704209 | 0.702093 | 3.3869 | 191.915 | 24.211 | 10.0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Wazzzabeee/PoliteT5Base
|
Wazzzabeee
| 2023-06-19T22:29:16Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T19:30:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: PoliteT5Base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PoliteT5Base
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8536
- Toxicity Ratio: 0.3421
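A hedged generation sketch with the Transformers `pipeline` (the input sentence and any expected prompt format are assumptions; this card does not document them):
```python
# Hedged usage sketch; the model's expected input format is not documented in this card.
from transformers import pipeline

rewriter = pipeline("text2text-generation", model="Wazzzabeee/PoliteT5Base")
print(rewriter("Give me the report now.", max_length=64)[0]["generated_text"])
```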
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 75
### Training results
| Training Loss | Epoch | Step | Validation Loss | Toxicity Ratio |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|
| No log | 1.0 | 22 | 1.3256 | 0.3070 |
| No log | 2.0 | 44 | 0.8436 | 0.2982 |
| 1.6337 | 3.0 | 66 | 0.7944 | 0.3333 |
| 1.6337 | 4.0 | 88 | 0.8921 | 0.3158 |
| 0.547 | 5.0 | 110 | 0.9630 | 0.2632 |
| 0.547 | 6.0 | 132 | 0.9711 | 0.3158 |
| 0.3279 | 7.0 | 154 | 0.9966 | 0.3070 |
| 0.3279 | 8.0 | 176 | 1.0053 | 0.3246 |
| 0.3279 | 9.0 | 198 | 1.0326 | 0.3333 |
| 0.2282 | 10.0 | 220 | 0.9798 | 0.3158 |
| 0.2282 | 11.0 | 242 | 1.0093 | 0.3333 |
| 0.1837 | 12.0 | 264 | 1.2380 | 0.3246 |
| 0.1837 | 13.0 | 286 | 1.1889 | 0.3860 |
| 0.1546 | 14.0 | 308 | 1.1985 | 0.3596 |
| 0.1546 | 15.0 | 330 | 1.2296 | 0.3509 |
| 0.1178 | 16.0 | 352 | 1.1394 | 0.3684 |
| 0.1178 | 17.0 | 374 | 1.1712 | 0.3596 |
| 0.1178 | 18.0 | 396 | 1.1586 | 0.4035 |
| 0.1185 | 19.0 | 418 | 1.9263 | 0.0789 |
| 0.1185 | 20.0 | 440 | 1.3483 | 0.3246 |
| 0.2332 | 21.0 | 462 | 1.3163 | 0.3158 |
| 0.2332 | 22.0 | 484 | 1.2926 | 0.3509 |
| 0.1267 | 23.0 | 506 | 1.2691 | 0.3421 |
| 0.1267 | 24.0 | 528 | 1.3298 | 0.3596 |
| 0.0879 | 25.0 | 550 | 1.2795 | 0.3509 |
| 0.0879 | 26.0 | 572 | 1.2826 | 0.3246 |
| 0.0879 | 27.0 | 594 | 1.2884 | 0.3158 |
| 0.0747 | 28.0 | 616 | 1.4146 | 0.4035 |
| 0.0747 | 29.0 | 638 | 1.3577 | 0.3596 |
| 0.0714 | 30.0 | 660 | 1.2663 | 0.3509 |
| 0.0714 | 31.0 | 682 | 1.2508 | 0.3772 |
| 0.0566 | 32.0 | 704 | 1.3980 | 0.4035 |
| 0.0566 | 33.0 | 726 | 1.4006 | 0.3860 |
| 0.0566 | 34.0 | 748 | 1.4090 | 0.3596 |
| 0.0572 | 35.0 | 770 | 1.4681 | 0.3246 |
| 0.0572 | 36.0 | 792 | 1.4254 | 0.3947 |
| 0.0456 | 37.0 | 814 | 1.4932 | 0.3246 |
| 0.0456 | 38.0 | 836 | 1.3994 | 0.2982 |
| 0.0385 | 39.0 | 858 | 1.4511 | 0.3421 |
| 0.0385 | 40.0 | 880 | 1.3007 | 0.3684 |
| 0.0223 | 41.0 | 902 | 1.3961 | 0.3158 |
| 0.0223 | 42.0 | 924 | 1.4619 | 0.3246 |
| 0.0223 | 43.0 | 946 | 1.3996 | 0.3246 |
| 0.0199 | 44.0 | 968 | 1.5012 | 0.3509 |
| 0.0199 | 45.0 | 990 | 1.4104 | 0.3246 |
| 0.018 | 46.0 | 1012 | 1.5855 | 0.3333 |
| 0.018 | 47.0 | 1034 | 1.4603 | 0.3333 |
| 0.0146 | 48.0 | 1056 | 1.5335 | 0.3421 |
| 0.0146 | 49.0 | 1078 | 1.4883 | 0.3772 |
| 0.0131 | 50.0 | 1100 | 1.5366 | 0.2982 |
| 0.0131 | 51.0 | 1122 | 1.5762 | 0.3509 |
| 0.0131 | 52.0 | 1144 | 1.5434 | 0.3333 |
| 0.0073 | 53.0 | 1166 | 1.4730 | 0.3158 |
| 0.0073 | 54.0 | 1188 | 1.5133 | 0.3509 |
| 0.0049 | 55.0 | 1210 | 1.6912 | 0.3509 |
| 0.0049 | 56.0 | 1232 | 1.6376 | 0.3509 |
| 0.0028 | 57.0 | 1254 | 1.8260 | 0.3509 |
| 0.0028 | 58.0 | 1276 | 1.5748 | 0.3509 |
| 0.0028 | 59.0 | 1298 | 1.6631 | 0.3509 |
| 0.0029 | 60.0 | 1320 | 1.7458 | 0.3509 |
| 0.0029 | 61.0 | 1342 | 1.6343 | 0.3684 |
| 0.002 | 62.0 | 1364 | 1.6433 | 0.3421 |
| 0.002 | 63.0 | 1386 | 1.7486 | 0.3509 |
| 0.0014 | 64.0 | 1408 | 1.8081 | 0.3684 |
| 0.0014 | 65.0 | 1430 | 1.8987 | 0.3947 |
| 0.0007 | 66.0 | 1452 | 1.8811 | 0.3596 |
| 0.0007 | 67.0 | 1474 | 1.8541 | 0.3596 |
| 0.0007 | 68.0 | 1496 | 1.8233 | 0.3509 |
| 0.001 | 69.0 | 1518 | 1.7747 | 0.3509 |
| 0.001 | 70.0 | 1540 | 1.8105 | 0.3509 |
| 0.0008 | 71.0 | 1562 | 1.8254 | 0.3596 |
| 0.0008 | 72.0 | 1584 | 1.8444 | 0.3684 |
| 0.0008 | 73.0 | 1606 | 1.8387 | 0.3509 |
| 0.0008 | 74.0 | 1628 | 1.8501 | 0.3509 |
| 0.0004 | 75.0 | 1650 | 1.8536 | 0.3421 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
gokuls/hbertv1-Massive-intent_48
|
gokuls
| 2023-06-19T22:21:18Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T22:12:24Z |
---
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-Massive-intent_48
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8573536645351697
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-Massive-intent_48
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8740
- Accuracy: 0.8574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4348 | 1.0 | 180 | 1.2038 | 0.6798 |
| 1.0006 | 2.0 | 360 | 0.8063 | 0.7831 |
| 0.6914 | 3.0 | 540 | 0.7823 | 0.7924 |
| 0.5 | 4.0 | 720 | 0.8175 | 0.7959 |
| 0.3877 | 5.0 | 900 | 0.7489 | 0.8239 |
| 0.2981 | 6.0 | 1080 | 0.7043 | 0.8446 |
| 0.2251 | 7.0 | 1260 | 0.7596 | 0.8372 |
| 0.181 | 8.0 | 1440 | 0.8237 | 0.8357 |
| 0.1367 | 9.0 | 1620 | 0.8323 | 0.8362 |
| 0.0995 | 10.0 | 1800 | 0.8589 | 0.8396 |
| 0.0726 | 11.0 | 1980 | 0.8476 | 0.8510 |
| 0.0501 | 12.0 | 2160 | 0.8901 | 0.8534 |
| 0.0338 | 13.0 | 2340 | 0.8992 | 0.8519 |
| 0.022 | 14.0 | 2520 | 0.8740 | 0.8574 |
| 0.0124 | 15.0 | 2700 | 0.8828 | 0.8554 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mrm8488/falcoder-7b
|
mrm8488
| 2023-06-19T22:10:37Z | 29 | 89 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"generated_from_trainer",
"code",
"coding",
"custom_code",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"doi:10.57967/hf/0789",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T21:26:49Z |
---
tags:
- generated_from_trainer
- code
- coding
model-index:
- name: FalCoder
results: []
license: apache-2.0
language:
- code
thumbnail: https://huggingface.co/mrm8488/falcoder-7b/resolve/main/falcoder.png
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/falcoder-7b/resolve/main/falcoder.png" alt="falcoder logo">
</div>
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FalCoder 🦅👩💻
**Falcon-7b** fine-tuned on the **CodeAlpaca 20k instructions dataset** using the **QLoRA** method with the [PEFT](https://github.com/huggingface/peft) library.
## Model description 🧠
[Falcon 7B](https://huggingface.co/tiiuae/falcon-7b)
## Training and evaluation data 📚
[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following data used for fine-tuning the Code Alpaca model.
### Training hyperparameters ⚙
TBA
### Training results 🗒️
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 100 | 0.798500 | 0.767996 |
| 200 | 0.725900 | 0.749880 |
| 300 | 0.669100 | 0.748029 |
| 400 | 0.687300 | 0.742342 |
| 500 | 0.579900 | 0.736735 |
### Example of usage 👩💻
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_id = "mrm8488/falcoder-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")
def generate(
instruction,
max_new_tokens=128,
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=4,
**kwargs
):
prompt = instruction + "\n### Solution:\n"
print(prompt)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to("cuda")
attention_mask = inputs["attention_mask"].to("cuda")
generation_config = GenerationConfig(
temperature=temperature,
top_p=top_p,
top_k=top_k,
num_beams=num_beams,
**kwargs,
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=max_new_tokens,
early_stopping=True
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
return output.split("### Solution:")[1].lstrip("\n")
instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```
### Citation
```
@misc {manuel_romero_2023,
author = { {Manuel Romero} },
title = { falcoder-7b (Revision e061237) },
year = 2023,
url = { https://huggingface.co/mrm8488/falcoder-7b },
doi = { 10.57967/hf/0789 },
publisher = { Hugging Face }
}
```
|
nolankurylo/IsOperatorClassifier
|
nolankurylo
| 2023-06-19T22:00:03Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T19:12:26Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nolankurylo/FineTunedHFModel
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nolankurylo/FineTunedHFModel
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0087
- Train Accuracy: 0.9981
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.0702 | 0.9780 | 0 |
| 0.0107 | 0.9966 | 1 |
| 0.0087 | 0.9981 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
bsuutari/path_to_saved_model_rafa
|
bsuutari
| 2023-06-19T21:57:03Z | 55 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-19T21:42:23Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of rafa suutari
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - bsuutari/path_to_saved_model_rafa
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of rafa suutari using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
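A minimal inference sketch with Diffusers (GPU use and fp16 are assumptions; the prompt is this card's instance prompt):
```python
# Hedged inference sketch; "a photo of rafa suutari" is the instance prompt from this card's metadata.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "bsuutari/path_to_saved_model_rafa", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of rafa suutari").images[0]
image.save("rafa_suutari.png")
```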
|
Brendan/refpydst-100p-referredstates
|
Brendan
| 2023-06-19T21:49:31Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T21:49:11Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-100p-referredstates-referred-states
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 100% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever with this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-100p-referredstates-referred-states')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-100p-referredstates-referred-states')
model = AutoModel.from_pretrained('Brendan/refpydst-100p-referredstates-referred-states')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-100p-referredstates-referred-states)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 45810 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 15300,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ducdh1210/dolly-lora-230619-2
|
ducdh1210
| 2023-06-19T21:30:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T21:30:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
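Given the 8-bit setting above, a hedged loading sketch (it assumes the adapter's PEFT config records the base model name, which this card does not list):
```python
# Hedged loading sketch; the base model is read from the adapter's own PEFT config.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "ducdh1210/dolly-lora-230619-2"
peft_config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```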
### Framework versions
- PEFT 0.4.0.dev0
|
platzi/platzi-vit-model-sandra-rairan
|
platzi
| 2023-06-19T21:14:22Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-17T21:47:28Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-vit-model-sandra-rairan
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9774436090225563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-sandra-rairan
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0582
- Accuracy: 0.9774
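A minimal inference sketch with the image-classification `pipeline` (`leaf.jpg` is a hypothetical local image of a bean leaf):
```python
# Hedged usage sketch; "leaf.jpg" is a placeholder path, not a file shipped with the model.
from transformers import pipeline

classifier = pipeline("image-classification", model="platzi/platzi-vit-model-sandra-rairan")
print(classifier("leaf.jpg"))
```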
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.143 | 3.85 | 500 | 0.0582 | 0.9774 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
NasimB/distilgpt2-concat
|
NasimB
| 2023-06-19T21:02:23Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T18:28:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: distilgpt2-concat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-concat
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3325
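A minimal generation sketch with the text-generation `pipeline` (the prompt and sampling settings are arbitrary choices):
```python
# Hedged usage sketch; see the assumptions noted above.
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/distilgpt2-concat")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```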
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7514 | 0.29 | 500 | 5.6224 |
| 5.3454 | 0.58 | 1000 | 5.1814 |
| 4.9931 | 0.87 | 1500 | 4.9290 |
| 4.7222 | 1.16 | 2000 | 4.7811 |
| 4.5672 | 1.45 | 2500 | 4.6657 |
| 4.4669 | 1.74 | 3000 | 4.5721 |
| 4.3738 | 2.02 | 3500 | 4.4939 |
| 4.175 | 2.31 | 4000 | 4.4613 |
| 4.1659 | 2.6 | 4500 | 4.4128 |
| 4.1369 | 2.89 | 5000 | 4.3666 |
| 3.9858 | 3.18 | 5500 | 4.3656 |
| 3.9337 | 3.47 | 6000 | 4.3419 |
| 3.9348 | 3.76 | 6500 | 4.3095 |
| 3.8826 | 4.05 | 7000 | 4.3066 |
| 3.7106 | 4.34 | 7500 | 4.3104 |
| 3.7404 | 4.63 | 8000 | 4.2893 |
| 3.7459 | 4.92 | 8500 | 4.2648 |
| 3.5695 | 5.21 | 9000 | 4.2984 |
| 3.536 | 5.49 | 9500 | 4.2887 |
| 3.5604 | 5.78 | 10000 | 4.2711 |
| 3.5007 | 6.07 | 10500 | 4.2900 |
| 3.3477 | 6.36 | 11000 | 4.3013 |
| 3.3629 | 6.65 | 11500 | 4.2906 |
| 3.3771 | 6.94 | 12000 | 4.2814 |
| 3.211 | 7.23 | 12500 | 4.3131 |
| 3.1938 | 7.52 | 13000 | 4.3124 |
| 3.21 | 7.81 | 13500 | 4.3093 |
| 3.159 | 8.1 | 14000 | 4.3204 |
| 3.0726 | 8.39 | 14500 | 4.3257 |
| 3.0762 | 8.68 | 15000 | 4.3269 |
| 3.0834 | 8.96 | 15500 | 4.3257 |
| 3.0173 | 9.25 | 16000 | 4.3311 |
| 3.0116 | 9.54 | 16500 | 4.3325 |
| 3.0155 | 9.83 | 17000 | 4.3325 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
bsuutari/path_to_saved_model
|
bsuutari
| 2023-06-19T20:58:31Z | 57 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-19T20:49:13Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - bsuutari/path_to_saved_model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
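A minimal inference sketch with Diffusers (GPU use and fp16 are assumptions; the prompt is this card's instance prompt):
```python
# Hedged inference sketch; "a photo of sks dog" is the instance prompt from this card's metadata.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "bsuutari/path_to_saved_model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
```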
|
Brendan/refpydst-1p-referredstates-split-v3
|
Brendan
| 2023-06-19T20:50:00Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:29:58Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-1p-referredstates-split-v3
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 1% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever with this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-1p-referredstates-split-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-1p-referredstates-split-v3')
model = AutoModel.from_pretrained('Brendan/refpydst-1p-referredstates-split-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-1p-referredstates-split-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 483 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 200,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-1p-referredstates-split-v1
|
Brendan
| 2023-06-19T20:50:00Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:10:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-1p-referredstates-split-v1
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 1% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever with this few-shot training set, which is provided in the linked repository. More details are available [in the repo](https://github.com/jlab-nlp/RefPyDST) and in the paper linked within. To cite this model, please consult the citation in the [linked GitHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated by `sentence_transformers` and is accurate, though this model is not intended as a general-purpose sentence encoder: it expects in-context examples from MultiWOZ to be formatted in a particular way; see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-1p-referredstates-split-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-1p-referredstates-split-v1')
model = AutoModel.from_pretrained('Brendan/refpydst-1p-referredstates-split-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-1p-referredstates-split-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 437 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 200,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-1p-referredstates-split-v2
|
Brendan
| 2023-06-19T20:50:00Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:29:30Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-1p-referredstates-split-v2
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 1% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever using this few-shot training set, which is provided in the linked repository. More details available [in the repo](https://github.com/jlab-nlp/RefPyDST) and paper linked within. To cite this model, please consult the citation in the [linked GithHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated from `sentence_transformers` and is accurate, though this model is not intended as a general purpose sentence-encoder: it is expecting in-context examples from MultiWOZ to be formatted in a particular way, see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-1p-referredstates-split-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-1p-referredstates-split-v2')
model = AutoModel.from_pretrained('Brendan/refpydst-1p-referredstates-split-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-1p-referredstates-split-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 435 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 200,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
IABCD/eduedudiffusion
|
IABCD
| 2023-06-19T20:49:50Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-19T19:33:34Z |
---
license: cc-by-nc-nd-4.0
tags:
- text-to-image
- stable-diffusion
---
### EduEduDiffusion0.2 Dreambooth model trained by nicolasdec for EduEdu
Test the concept via [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Training version 0.2.
Positive Prompts: PROMPT, (eduedu) style, illustration, vector, cartoon lighting
Negatives: bad anatomy, ugly, missing arms, bad proportions, tiling, missing legs, blurry, poorly drawn feet, morbid, cloned face, extra limbs, mutated hands, cropped, disfigured, mutation, deformed, deformed, mutilated, dehydrated, body out of frame, out of frame, disfigured, bad anatomy, poorly drawn face, duplicate, cut off, poorly drawn hands, error, low contrast, signature, extra arms, underexposed, text, extra fingers, overexposed, too many fingers, extra legs, bad art, ugly, extra limbs, beginner, username, fused fingers, amateur, watermark, gross proportions, distorted face, worst quality, jpeg artifacts, low quality, malformed limbs, long neck, lowres, poorly Rendered face, low resolution, low saturation, bad composition, Images cut out at the top, left, right, bottom, deformed body features, poorly rendered hands
|
Brendan/refpydst-5p-referredstates-split-v2
|
Brendan
| 2023-06-19T20:49:35Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:26:56Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-5p-referredstates-split-v2
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 5% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever using this few-shot training set, which is provided in the linked repository. More details available [in the repo](https://github.com/jlab-nlp/RefPyDST) and paper linked within. To cite this model, please consult the citation in the [linked GithHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated from `sentence_transformers` and is accurate, though this model is not intended as a general purpose sentence-encoder: it is expecting in-context examples from MultiWOZ to be formatted in a particular way, see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-5p-referredstates-split-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-5p-referredstates-split-v2')
model = AutoModel.from_pretrained('Brendan/refpydst-5p-referredstates-split-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-5p-referredstates-split-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2295 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 800,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-5p-icdst-split-v3
|
Brendan
| 2023-06-19T20:49:28Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:26:23Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-5p-icdst-split-v3
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 5% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever using this few-shot training set, which is provided in the linked repository. More details available [in the repo](https://github.com/jlab-nlp/RefPyDST) and paper linked within. To cite this model, please consult the citation in the [linked GithHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated from `sentence_transformers` and is accurate, though this model is not intended as a general purpose sentence-encoder: it is expecting in-context examples from MultiWOZ to be formatted in a particular way, see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-5p-icdst-split-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-5p-icdst-split-v3')
model = AutoModel.from_pretrained('Brendan/refpydst-5p-icdst-split-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-5p-icdst-split-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2233 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 800,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-5p-icdst-split-v2
|
Brendan
| 2023-06-19T20:49:25Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:25:30Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-5p-icdst-split-v2
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 5% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever using this few-shot training set, which is provided in the linked repository. More details available [in the repo](https://github.com/jlab-nlp/RefPyDST) and paper linked within. To cite this model, please consult the citation in the [linked GithHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated from `sentence_transformers` and is accurate, though this model is not intended as a general purpose sentence-encoder: it is expecting in-context examples from MultiWOZ to be formatted in a particular way, see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-5p-icdst-split-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-5p-icdst-split-v2')
model = AutoModel.from_pretrained('Brendan/refpydst-5p-icdst-split-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-5p-icdst-split-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2295 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 800,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Brendan/refpydst-10p-referredstates-split-v1
|
Brendan
| 2023-06-19T20:49:24Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-19T19:24:28Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Brendan/refpydst-10p-referredstates-split-v1
This model was initialized with `sentence-transformers/all-mpnet-base-v2` and then fine-tuned using a 10% few-shot split of the MultiWOZ dataset and a supervised contrastive loss. It is fine-tuned to be used as an in-context example retriever using this few-shot training set, which is provided in the linked repository. More details available [in the repo](https://github.com/jlab-nlp/RefPyDST) and paper linked within. To cite this model, please consult the citation in the [linked GithHub repository README](https://github.com/jlab-nlp/RefPyDST).
The remainder of this README is automatically generated from `sentence_transformers` and is accurate, though this model is not intended as a general purpose sentence-encoder: it is expecting in-context examples from MultiWOZ to be formatted in a particular way, see the linked repo for details.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Brendan/refpydst-10p-referredstates-split-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Brendan/refpydst-10p-referredstates-split-v1')
model = AutoModel.from_pretrained('Brendan/refpydst-10p-referredstates-split-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Brendan/refpydst-10p-referredstates-split-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4567 with parameters:
```
{'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 1600,
"evaluator": "refpydst.retriever.code.st_evaluator.RetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
mrizalf7/xlm-roberta-finetuned-small-squad-indonesian-rizal-9
|
mrizalf7
| 2023-06-19T20:40:01Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-19T17:28:21Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-finetuned-small-squad-indonesian-rizal-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-finetuned-small-squad-indonesian-rizal-9
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6372 | 1.0 | 4128 | 1.7537 |
| 1.3958 | 2.0 | 8256 | 1.7289 |
| 1.2485 | 3.0 | 12384 | 1.7340 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
cosimoiaia/Loquace-12B
|
cosimoiaia
| 2023-06-19T20:23:47Z | 20 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"alpaca",
"llama",
"llm",
"finetune",
"Italian",
"qlora",
"conversational",
"it",
"dataset:cosimoiaia/Loquace-102k",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-02T20:36:15Z |
---
license: cc-by-nc-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
pipeline_tag: conversational
tags:
- alpaca
- llama
- llm
- finetune
- Italian
- qlora
---
Model Card for Loquace-12B
# 🇮🇹 Loquace-12B 🇮🇹
An exclusively Italian speaking, instruction finetuned, Large Language model. 🇮🇹
The Loquace Italian LLM models are created as a proof-of-concept to evaluate on how language tuning can be achieved using QLoRa by instruct-tunings foundational LLMs
using dataset of a specific language.
The QLoRa (https://github.com/artidoro/qlora) method of fine-tuning significantly lower the resources requirements compared to any other methods available,
this allow to easily execute the process on significanly larger dataset while still using consumers GPUs and still achieve high accuracy.
## Model Description
Loquace-12B is the first 12B italian Large Language Model trained using QLoRa on a large dataset of 102k question/answer pairs
exclusively in Italian.
The related code can be found at:
https://github.com/cosimoiaia/Loquace
Loquace-12B is part of the big Loquace family:
https://huggingface.co/cosimoiaia/Loquace-70m - Based on pythia-70m
https://huggingface.co/cosimoiaia/Loquace-410m - Based on pythia-410m
https://huggingface.co/cosimoiaia/Loquace-7B - Based on Falcon-7B
https://huggingface.co/cosimoiaia/Loquace-12B - Based on pythia-12B
https://huggingface.co/cosimoiaia/Loquace-20B - Based on gpt-neox-20B
## Usage
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
BitsAndBytesConfig
)
tokenizer = AutoTokenizer.from_pretrained("cosimoiaia/Loquace-12B", padding_side="right", use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
"cosimoiaia/Loquace-12B",
load_in_8bit=True,
device_map="auto",
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
llm_int8_has_fp16_weight=False
)
)
```
## Training
Loquace-12B was trained on a conversational dataset comprising 102k question/answer pairs in Italian language.
The training data was constructed by putting together translations from the original alpaca Dataset and other sources like the OpenAssistant dataset.
The model was trained for only 3000 iterations and took 18 hours on 4 RTX 3090, kindly provided by Genesis Cloud. (https://gnsiscld.co/26qhlf)
## Limitations
- Loquace-12B may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
- The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains.
## Dependencies
- PyTorch
- Transformers library by Hugging Face
- Bitsandbites
- QLoRa
|
cosimoiaia/Loquace-410m
|
cosimoiaia
| 2023-06-19T20:22:44Z | 183 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"alpaca",
"llama",
"llm",
"finetune",
"Italian",
"qlora",
"conversational",
"it",
"dataset:cosimoiaia/Loquace-102k",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-02T05:30:16Z |
---
license: cc-by-nc-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
pipeline_tag: conversational
tags:
- alpaca
- llama
- llm
- finetune
- Italian
- qlora
---
Model Card for Loquace-410m
# 🇮🇹 Loquace-410m 🇮🇹
An exclusively Italian speaking, instruction finetuned, Large Language model. 🇮🇹
The Loquace Italian LLM models are created as a proof-of-concept to evaluate on how language tuning can be achieved using QLoRa by instruct-tunings foundational LLMs
using dataset of a specific language.
The QLoRa (https://github.com/artidoro/qlora) method of fine-tuning significantly lower the resources requirements compared to any other methods available,
this allow to easily execute the process on significanly larger dataset while still using consumers GPUs and still achieve high accuracy.
## Model Description
Loquace-410m is the second smallest model of the Loquace family. It was trained using QLoRa on a large dataset of 102k question/answer pairs
exclusively in Italian using pythia-410m as base.
The related code can be found at: https://github.com/cosimoiaia/Loquace
Loquace-410m is part of the big Loquace family:
https://huggingface.co/cosimoiaia/Loquace-70m - Based on pythia-70m
https://huggingface.co/cosimoiaia/Loquace-410m - Based on pythia-410m
https://huggingface.co/cosimoiaia/Loquace-7B - Based on Falcon-7B.
https://huggingface.co/cosimoiaia/Loquace-12B - Based on pythia-12B
https://huggingface.co/cosimoiaia/Loquace-20B - Based on gpt-neox-20B
## Usage
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
BitsAndBytesConfig
)
tokenizer = AutoTokenizer.from_pretrained("cosimoiaia/Loquace-410m", padding_side="right", use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
"cosimoiaia/Loquace-410m",
load_in_8bit=True,
device_map="auto",
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
llm_int8_has_fp16_weight=False
)
)
```
## Training
Loquace-410m was trained on a conversational dataset comprising 102k question/answer pairs in Italian language.
The training data was constructed by putting together translations from the original alpaca Dataset and other sources like the OpenAssistant dataset.
The model was trained for only 10000 iterations and took 9 hours on a single RTX 3090, kindly provided by Genesis Cloud. (https://gnsiscld.co/26qhlf)
## Limitations
- Loquace-410m may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
- The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains.
## Dependencies
- PyTorch
- Transformers library by Hugging Face
- Bitsandbites
- QLoRa
|
cosimoiaia/Loquace-70m
|
cosimoiaia
| 2023-06-19T20:21:56Z | 182 | 3 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"alpaca",
"llama",
"llm",
"finetune",
"Italian",
"qlora",
"conversational",
"it",
"dataset:cosimoiaia/Loquace-102k",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-02T05:18:49Z |
---
license: cc-by-nc-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
pipeline_tag: conversational
tags:
- alpaca
- llama
- llm
- finetune
- Italian
- qlora
---
Model Card for Loquace-70m
# 🇮🇹 Loquace-70m 🇮🇹
An exclusively Italian speaking, instruction finetuned, Large Language model. 🇮🇹
The Loquace Italian LLM models are created as a proof-of-concept to evaluate on how language tuning can be achieved using QLoRa by instruct-tunings foundational LLMs
using dataset of a specific language.
The QLoRa (https://github.com/artidoro/qlora) method of fine-tuning significantly lower the resources requirements compared to any other methods available,
this allow to easily execute the process on significanly larger dataset while still using consumers GPUs and still achieve high accuracy.
## Model Description
Loquace-70m is the smallest model of the Loquace family. It was trained using QLoRa on a large dataset of 102k question/answer pairs
exclusively in Italian.
The related code can be found at: https://github.com/cosimoiaia/Loquace
Loquace-70m is part of the big Loquace family:
https://huggingface.co/cosimoiaia/Loquace-70m - Based on pythia-70m
https://huggingface.co/cosimoiaia/Loquace-410m - Based on pythia-410m
https://huggingface.co/cosimoiaia/Loquace-7B - Based on Falcon-7B.
https://huggingface.co/cosimoiaia/Loquace-12B - Based on pythia-12B
https://huggingface.co/cosimoiaia/Loquace-20B - Based on gpt-neox-20B
## Usage
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
BitsAndBytesConfig
)
tokenizer = AutoTokenizer.from_pretrained("cosimoiaia/Loquace-70m", padding_side="right", use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
"cosimoiaia/Loquace-70m",
load_in_8bit=True,
device_map="auto",
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
llm_int8_has_fp16_weight=False
)
)
```
## Training
Loquace-70m was trained on a conversational dataset comprising 102k question/answer pairs in Italian language.
The training data was constructed by putting together translations from the original alpaca Dataset and other sources like the OpenAssistant dataset.
The model was trained for only 10000 iterations and took 6 hours on a single RTX 3090, kindly provided by Genesis Cloud. (https://gnsiscld.co/26qhlf)
## Limitations
- Loquace-70m may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
- The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains.
## Dependencies
- PyTorch
- Transformers library by Hugging Face
- Bitsandbites
- QLoRa
|
sd-concepts-library/mersh
|
sd-concepts-library
| 2023-06-19T20:08:53Z | 0 | 0 | null |
[
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] | null | 2023-06-19T20:08:51Z |
---
license: mit
base_model: stabilityai/stable-diffusion-2
---
### Mersh on Stable Diffusion
This is the `<lolcowmersh>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
Draconis42/q-FrozenLake-v1-4x4-noSlippery
|
Draconis42
| 2023-06-19T19:56:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T19:54:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Draconis42/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
wesleyacheng/sms-spam-classification-with-bert
|
wesleyacheng
| 2023-06-19T19:39:06Z | 8,660 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:sms_spam",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T05:30:59Z |
---
license: apache-2.0
datasets:
- sms_spam
language:
- en
metrics:
- f1
- accuracy
pipeline_tag: text-classification
widget:
- text: +26.787$ burn out in 24 hours, Let it have drowned, http://bit.ly/7ayp
example_title: Spam Example
- text: Hey want to cook something together tonight?
example_title: Ham Example
---
First posted in my [Kaggle](https://www.kaggle.com/code/wesleyacheng/sms-spam-classification-with-bert).
You know what really grinds my gears. Spam! 😤
I made a sms spam classifier using [transfer learning](https://en.wikipedia.org/wiki/Transfer_learning) on [BERT](https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html) with a [Singaporean SMS Spam dataset](https://huggingface.co/datasets/sms_spam).
|
digiplay/kotosmix_diffusers
|
digiplay
| 2023-06-19T19:14:40Z | 340 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-29T08:48:08Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
model info : https://civitai.com/models/5245/kotosmix
you can apply VAE to get better color,
example codes for diffusers:
```
#VAE
from diffusers.models import AutoencoderKL
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
modelid="digiplay/kotosmix_diffusers"
pipe = DiffusionPipeline.from_pretrained(modelid, vae=vae)
```
PS: Recommended *Euler* scheduler type.
|
gokuls/hbertv1-Massive-intent
|
gokuls
| 2023-06-19T19:11:18Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T19:02:13Z |
---
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv1-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8450565666502705
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hbertv1-Massive-intent
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8959
- Accuracy: 0.8451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.051 | 1.0 | 180 | 1.8409 | 0.4968 |
| 1.3906 | 2.0 | 360 | 1.0234 | 0.7167 |
| 0.8613 | 3.0 | 540 | 0.8787 | 0.7688 |
| 0.6447 | 4.0 | 720 | 0.8405 | 0.7811 |
| 0.4955 | 5.0 | 900 | 0.8426 | 0.7850 |
| 0.3899 | 6.0 | 1080 | 0.7777 | 0.8175 |
| 0.3052 | 7.0 | 1260 | 0.7779 | 0.8175 |
| 0.2413 | 8.0 | 1440 | 0.8294 | 0.8254 |
| 0.196 | 9.0 | 1620 | 0.8265 | 0.8214 |
| 0.1545 | 10.0 | 1800 | 0.8361 | 0.8362 |
| 0.1177 | 11.0 | 1980 | 0.8470 | 0.8288 |
| 0.0894 | 12.0 | 2160 | 0.8706 | 0.8283 |
| 0.0666 | 13.0 | 2340 | 0.8853 | 0.8392 |
| 0.0447 | 14.0 | 2520 | 0.8959 | 0.8451 |
| 0.0312 | 15.0 | 2700 | 0.8982 | 0.8441 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
digiplay/realdosmix_diffusers
|
digiplay
| 2023-06-19T19:06:46Z | 350 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-23T15:17:35Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
hi,I am newbie here, this is a test for transfer RealDosMix model to diffusers
Scheduler type: DPM
Model info:
https://civitai.com/models/6925/realdosmix
|
digiplay/bra_v40_diffusers
|
digiplay
| 2023-06-19T18:59:09Z | 369 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-25T18:57:35Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/25494/beautiful-realistic-asians
|
greenw0lf/wav2vec2-large-xls-r-1b-frisian-cv-8-10h
|
greenw0lf
| 2023-06-19T18:58:05Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-31T10:06:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-1b-frisian-cv-8-10h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_8_0
type: common_voice_8_0
config: fy-NL
split: validation
args: fy-NL
metrics:
- name: Wer
type: wer
value: 0.09612912441079846
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_8_0
type: common_voice_8_0
config: fy-NL
split: test
args: fy-NL
metrics:
- name: Wer
type: wer
value: 0.08830755889579418
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-frisian-cv-8-10h
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1207
- Wer: 0.0961
And on the test set:
- Wer: 0.0883
## Model description
This model has been developed for my Master's thesis in "Voice Technology" at Rijksuniversiteit Groningen - Campus Fryslân. It corresponds to experiment 3 where
I use as training set 10 hours of Frisian speech randomly selected from all validated data except the test and evaluation sets.
## Intended uses & limitations
The intended use is for recognizing Frisian speech.
Limitations include no LM rescoring and using version 8.0 of Common Voice instead of 13.0.
## Training and evaluation data
The evaluation split used is the one available in the Common Voice 8.0 Frisian subset. The train split is 10 hours of Frisian randomly selected from validated data except for the recordings from test and evaluation splits.
## Training procedure
The script used for training this model can be found in this GitHub repository: [link](https://github.com/greenw0lf/MSc-VT-Thesis/).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.6342 | 1.32 | 300 | 2.9760 | 1.0 |
| 2.2716 | 2.63 | 600 | 0.6877 | 0.6024 |
| 1.1303 | 3.95 | 900 | 0.3522 | 0.3450 |
| 0.9038 | 5.26 | 1200 | 0.2714 | 0.2603 |
| 0.846 | 6.58 | 1500 | 0.2143 | 0.2036 |
| 0.8044 | 7.89 | 1800 | 0.1829 | 0.1788 |
| 0.7069 | 9.21 | 2100 | 0.1751 | 0.1667 |
| 0.6995 | 10.53 | 2400 | 0.1741 | 0.1727 |
| 0.7115 | 11.84 | 2700 | 0.1591 | 0.1486 |
| 0.677 | 13.16 | 3000 | 0.1636 | 0.1459 |
| 0.6032 | 14.47 | 3300 | 0.1535 | 0.1439 |
| 0.6218 | 15.79 | 3600 | 0.1427 | 0.1406 |
| 0.6519 | 17.11 | 3900 | 0.1498 | 0.1488 |
| 0.5739 | 18.42 | 4200 | 0.1438 | 0.1319 |
| 0.567 | 19.74 | 4500 | 0.1379 | 0.1322 |
| 0.4982 | 21.05 | 4800 | 0.1315 | 0.1237 |
| 0.5825 | 22.37 | 5100 | 0.1349 | 0.1252 |
| 0.5085 | 23.68 | 5400 | 0.1297 | 0.1233 |
| 0.4946 | 25.0 | 5700 | 0.1343 | 0.1127 |
| 0.5677 | 26.32 | 6000 | 0.1323 | 0.1228 |
| 0.4858 | 27.63 | 6300 | 0.1292 | 0.1098 |
| 0.4709 | 28.95 | 6600 | 0.1267 | 0.1204 |
| 0.3241 | 30.26 | 6900 | 0.1315 | 0.1274 |
| 0.2796 | 31.58 | 7200 | 0.1315 | 0.1202 |
| 0.3171 | 32.89 | 7500 | 0.1315 | 0.1200 |
| 0.2591 | 34.21 | 7800 | 0.1322 | 0.1106 |
| 0.2716 | 35.53 | 8100 | 0.1233 | 0.1030 |
| 0.2446 | 36.84 | 8400 | 0.1273 | 0.1087 |
| 0.2377 | 38.16 | 8700 | 0.1243 | 0.1101 |
| 0.2183 | 39.47 | 9000 | 0.1230 | 0.1116 |
| 0.2059 | 40.79 | 9300 | 0.1240 | 0.1001 |
| 0.1916 | 42.11 | 9600 | 0.1223 | 0.1003 |
| 0.196 | 43.42 | 9900 | 0.1246 | 0.0965 |
| 0.1969 | 44.74 | 10200 | 0.1222 | 0.1038 |
| 0.1951 | 46.05 | 10500 | 0.1208 | 0.1003 |
| 0.1809 | 47.37 | 10800 | 0.1213 | 0.1003 |
| 0.1793 | 48.68 | 11100 | 0.1202 | 0.0959 |
| 0.1837 | 50.0 | 11400 | 0.1207 | 0.0961 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
fedbor/secondo_modello
|
fedbor
| 2023-06-19T18:55:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T18:55:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
amangarg98/my_awesome_model
|
amangarg98
| 2023-06-19T18:51:53Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T18:40:56Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: amangarg98/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amangarg98/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0266
- Validation Loss: 0.0126
- Train Accuracy: 0.9953
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3492, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.0266 | 0.0126 | 0.9953 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
MUmairAB/English_to_French_Translation_Transformer
|
MUmairAB
| 2023-06-19T18:46:14Z | 1 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-18T08:50:01Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | RMSprop |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | 100 |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| rho | 0.9 |
| momentum | 0.0 |
| epsilon | 1e-07 |
| centered | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
hopkins/ss-10k
|
hopkins
| 2023-06-19T18:19:03Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T18:07:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: ss-10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ss-10k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 18
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1881 | 15.38 | 200 | 5.8726 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mun33b/dqn-SpaceInvadersNoFrameskip-v4
|
mun33b
| 2023-06-19T18:14:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T15:53:18Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 523.50 +/- 90.11
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mun33b -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mun33b -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mun33b
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
UnaiGurbindo/ppo-LunarLander-v2
|
UnaiGurbindo
| 2023-06-19T18:13:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T18:13:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.59 +/- 20.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub("UnaiGurbindo/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
SlyEcho/open_llama_13b_ggml
|
SlyEcho
| 2023-06-19T17:56:45Z | 0 | 5 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-07T16:18:38Z |
---
license: apache-2.0
---
# ggml versions of OpenLLaMa 13B
For use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
- Version: 1000B tokens final release
- Project: [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama)
- Model: [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)
- llama.cpp 4-, 5-, and 8-bit quantization: build 567 (2d5db48) or later
- llama.cpp newer quantization formats: build 616 (99009e7) or later
|
hassansoliman/falcon-40b-qlora-utterance-adaptations_v3
|
hassansoliman
| 2023-06-19T17:52:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T17:51:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
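For reference, this corresponds roughly to the following `transformers.BitsAndBytesConfig` (a sketch, not the original training code):

```python
# Illustrative reconstruction of the quantization settings listed above
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
```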
### Framework versions
- PEFT 0.4.0.dev0
|
ABAtanasov/q-FrozenLake-v1-4x4-noSlippery
|
ABAtanasov
| 2023-06-19T17:46:40Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T17:46:37Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ABAtanasov/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fedbor/primo_modello
|
fedbor
| 2023-06-19T17:40:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T17:40:25Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
mrm8488/falcon-7b-ft-codeAlpaca_20k
|
mrm8488
| 2023-06-19T17:35:58Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-06-19T14:46:27Z |
---
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-ft-codeAlpaca_20k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-ft-codeAlpaca_20k
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7623 | 0.18 | 50 | 0.7899 |
| 0.7985 | 0.35 | 100 | 0.7680 |
| 0.7551 | 0.53 | 150 | 0.7570 |
| 0.7261 | 0.71 | 200 | 0.7499 |
| 0.8184 | 0.89 | 250 | 0.7461 |
| 0.7067 | 1.06 | 300 | 0.7480 |
| 0.6801 | 1.24 | 350 | 0.7463 |
| 0.6432 | 1.42 | 400 | 0.7423 |
| 0.7141 | 1.6 | 450 | 0.7398 |
| 0.669 | 1.77 | 500 | 0.7383 |
| 0.7177 | 1.95 | 550 | 0.7342 |
| 0.6419 | 2.13 | 600 | 0.7553 |
| 0.6395 | 2.3 | 650 | 0.7510 |
| 0.6255 | 2.48 | 700 | 0.7498 |
| 0.5556 | 2.66 | 750 | 0.7474 |
| 0.6592 | 2.84 | 800 | 0.7470 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hungngo04/cluster_to_text
|
hungngo04
| 2023-06-19T17:28:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T16:06:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: cluster_to_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cluster_to_text
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0608
- Bleu: 39.5087
- Gen Len: 10.2429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8864 | 1.0 | 4678 | 1.5653 | 17.9224 | 10.3526 |
| 1.6271 | 2.0 | 9356 | 1.3336 | 26.9113 | 10.2905 |
| 1.4621 | 3.0 | 14034 | 1.1952 | 32.9922 | 10.2873 |
| 1.3908 | 4.0 | 18712 | 1.1183 | 36.6438 | 10.2917 |
| 1.3385 | 5.0 | 23390 | 1.0753 | 38.768 | 10.2479 |
| 1.3138 | 6.0 | 28068 | 1.0608 | 39.5087 | 10.2429 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
elberaguilar/finetuning-sentiment-model-3000-samples
|
elberaguilar
| 2023-06-19T16:43:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T04:20:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1583
- Accuracy: 0.9493
- F1: 0.9676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hongrui/mammogtram_v_1_1
|
hongrui
| 2023-06-19T16:39:13Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-14T15:09:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images below, followed by an illustrative loading sketch.




|
sevdeawesome/Taxi-v3
|
sevdeawesome
| 2023-06-19T16:35:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T16:33:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.78
name: mean_reward
verified: false
---
|
HyunjooCheong/my_awesome_eli5_clm-model
|
HyunjooCheong
| 2023-06-19T16:35:17Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T09:11:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8556 | 1.0 | 1131 | 3.7857 |
| 3.7657 | 2.0 | 2262 | 3.7707 |
| 3.7226 | 3.0 | 3393 | 3.7693 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Narotomaki/kimihimee
|
Narotomaki
| 2023-06-19T16:30:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-11T14:34:16Z |
---
license: creativeml-openrail-m
---
|
hts98/wav2vec2-common_voice-tr-mms-demo
|
hts98
| 2023-06-19T16:09:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"vi",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-18T08:21:21Z |
---
language:
- vi
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-common_voice-tr-mms-demo
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: COMMON_VOICE - VI
type: common_voice
config: vi
split: test
args: 'Config: vi, Training split: train, Eval split: test'
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-mms-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - VI dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5581
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 1.79 | 100 | 3.6345 | 1.0 |
| No log | 3.57 | 200 | 3.6709 | 1.0 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.7.0
- Tokenizers 0.13.3
|
hungngo04/my_awesome_opus_books_model
|
hungngo04
| 2023-06-19T16:05:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-13T07:22:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9985
- Bleu: 6.0773
- Gen Len: 10.9877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.4478 | 1.0 | 4678 | 2.1576 | 3.7548 | 11.3567 |
| 2.2537 | 2.0 | 9356 | 1.9985 | 6.0773 | 10.9877 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
NasimB/bert-dp-second
|
NasimB
| 2023-06-19T16:04:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:generator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-18T09:08:18Z |
---
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-dp-second
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-dp-second
This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 19
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.3416 | 0.23 | 500 | 6.6532 |
| 6.5752 | 0.47 | 1000 | 6.5275 |
| 6.4866 | 0.7 | 1500 | 6.4720 |
| 6.4273 | 0.93 | 2000 | 6.4540 |
| 6.4036 | 1.17 | 2500 | 6.4236 |
| 6.3779 | 1.4 | 3000 | 6.4018 |
| 6.3528 | 1.63 | 3500 | 6.3768 |
| 6.3258 | 1.87 | 4000 | 6.3679 |
| 6.3009 | 2.1 | 4500 | 6.3305 |
| 6.2646 | 2.33 | 5000 | 6.3142 |
| 6.2583 | 2.57 | 5500 | 6.3004 |
| 6.2223 | 2.8 | 6000 | 6.2605 |
| 6.1941 | 3.03 | 6500 | 6.2353 |
| 6.1382 | 3.27 | 7000 | 6.2095 |
| 6.1301 | 3.5 | 7500 | 6.1774 |
| 6.09 | 3.73 | 8000 | 6.1480 |
| 6.0624 | 3.97 | 8500 | 6.1061 |
| 6.0056 | 4.2 | 9000 | 6.0655 |
| 5.9444 | 4.43 | 9500 | 5.9461 |
| 5.7101 | 4.67 | 10000 | 5.2594 |
| 5.005 | 4.9 | 10500 | 4.7348 |
| 4.6127 | 5.13 | 11000 | 4.4626 |
| 4.3907 | 5.37 | 11500 | 4.2862 |
| 4.241 | 5.6 | 12000 | 4.1701 |
| 4.1286 | 5.83 | 12500 | 4.0673 |
| 4.0151 | 6.07 | 13000 | 3.9967 |
| 3.934 | 6.3 | 13500 | 3.9292 |
| 3.8789 | 6.53 | 14000 | 3.8707 |
| 3.8231 | 6.77 | 14500 | 3.8222 |
| 3.7696 | 7.0 | 15000 | 3.7800 |
| 3.7078 | 7.23 | 15500 | 3.7424 |
| 3.6671 | 7.47 | 16000 | 3.7093 |
| 3.6446 | 7.7 | 16500 | 3.6780 |
| 3.6069 | 7.93 | 17000 | 3.6476 |
| 3.5782 | 8.17 | 17500 | 3.6283 |
| 3.5384 | 8.4 | 18000 | 3.6098 |
| 3.5245 | 8.63 | 18500 | 3.5942 |
| 3.5209 | 8.87 | 19000 | 3.5841 |
| 3.4948 | 9.1 | 19500 | 3.5728 |
| 3.4877 | 9.33 | 20000 | 3.5692 |
| 3.4818 | 9.57 | 20500 | 3.5641 |
| 3.4844 | 9.8 | 21000 | 3.5640 |
| 3.5323 | 10.03 | 21500 | 3.6026 |
| 3.5123 | 10.27 | 22000 | 3.5877 |
| 3.5046 | 10.5 | 22500 | 3.5595 |
| 3.4787 | 10.73 | 23000 | 3.5403 |
| 3.4568 | 10.97 | 23500 | 3.5125 |
| 3.4154 | 11.2 | 24000 | 3.4916 |
| 3.3998 | 11.43 | 24500 | 3.4749 |
| 3.3986 | 11.67 | 25000 | 3.4578 |
| 3.372 | 11.9 | 25500 | 3.4405 |
| 3.3402 | 12.13 | 26000 | 3.4317 |
| 3.3281 | 12.37 | 26500 | 3.4215 |
| 3.322 | 12.6 | 27000 | 3.4093 |
| 3.3198 | 12.83 | 27500 | 3.4026 |
| 3.3039 | 13.07 | 28000 | 3.3971 |
| 3.296 | 13.3 | 28500 | 3.3954 |
| 3.3015 | 13.53 | 29000 | 3.3954 |
| 3.2939 | 13.77 | 29500 | 3.3927 |
| 3.3013 | 14.0 | 30000 | 3.3918 |
| 3.343 | 14.23 | 30500 | 3.4265 |
| 3.3438 | 14.47 | 31000 | 3.4133 |
| 3.3397 | 14.7 | 31500 | 3.3951 |
| 3.3156 | 14.93 | 32000 | 3.3681 |
| 3.2815 | 15.17 | 32500 | 3.3503 |
| 3.2654 | 15.4 | 33000 | 3.3313 |
| 3.2492 | 15.63 | 33500 | 3.3184 |
| 3.2399 | 15.87 | 34000 | 3.2995 |
| 3.2222 | 16.1 | 34500 | 3.2922 |
| 3.2026 | 16.33 | 35000 | 3.2818 |
| 3.191 | 16.57 | 35500 | 3.2723 |
| 3.1825 | 16.8 | 36000 | 3.2640 |
| 3.1691 | 17.03 | 36500 | 3.2530 |
| 3.1656 | 17.27 | 37000 | 3.2487 |
| 3.1487 | 17.5 | 37500 | 3.2419 |
| 3.1635 | 17.73 | 38000 | 3.2411 |
| 3.1675 | 17.97 | 38500 | 3.2330 |
| 3.1422 | 18.2 | 39000 | 3.2344 |
| 3.1443 | 18.43 | 39500 | 3.2331 |
| 3.1425 | 18.67 | 40000 | 3.2348 |
| 3.139 | 18.9 | 40500 | 3.2321 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Lipov91/mt5-small-finetuned-geodescriptions
|
Lipov91
| 2023-06-19T15:51:17Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T15:49:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Lipov91/mt5-small-finetuned-geodescriptions
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Lipov91/mt5-small-finetuned-geodescriptions
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 29.3069
- Validation Loss: 14.6929
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 8, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 29.3069 | 14.6929 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Noahhow/Gragas
|
Noahhow
| 2023-06-19T15:47:32Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"Lol",
"League of legends ",
"audio-to-audio",
"en",
"dataset:tiiuae/falcon-refinedweb",
"license:creativeml-openrail-m",
"region:us"
] |
audio-to-audio
| 2023-06-19T15:38:07Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
metrics:
- charcut_mt
pipeline_tag: audio-to-audio
tags:
- Lol
- 'League of legends '
license: creativeml-openrail-m
library_name: adapter-transformers
---
|
CodyKilpatrick/a2c-AntBulletEnv-v0
|
CodyKilpatrick
| 2023-06-19T15:40:22Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T15:37:59Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1212.25 +/- 179.65
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub("CodyKilpatrick/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
andrewsiah/q-FrozenLake-v1-4x4-noSlippery
|
andrewsiah
| 2023-06-19T15:16:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T15:16:21Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="andrewsiah/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Heng666/falcon-7b-sharded-bf16-english-quote-qlora
|
Heng666
| 2023-06-19T15:10:33Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T15:05:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Keithulu/distilgpt2-finetuned-python-stack
|
Keithulu
| 2023-06-19T15:02:19Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T14:49:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-python-stack
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-python-stack
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 3.1229 |
| No log | 2.0 | 182 | 2.9666 |
| No log | 3.0 | 273 | 2.9321 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rosadecsai/distilbert-base-uncased-finetuned-emotion
|
rosadecsai
| 2023-06-19T14:59:36Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T11:23:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9223397880179345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2210
- Accuracy: 0.9225
- F1: 0.9223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8335 | 1.0 | 250 | 0.3278 | 0.8985 | 0.8937 |
| 0.2523 | 2.0 | 500 | 0.2210 | 0.9225 | 0.9223 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sarahpuspdew/DeepRLCourse_Unit6-a2c-AntBulletEnv-v0
|
sarahpuspdew
| 2023-06-19T14:55:25Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T14:54:23Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1322.86 +/- 745.96
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub("sarahpuspdew/DeepRLCourse_Unit6-a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
teddy0413/Accounting_glm0619
|
teddy0413
| 2023-06-19T14:55:02Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-19T14:54:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
syf2023/gpt2
|
syf2023
| 2023-06-19T14:53:15Z | 203 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T14:49:39Z |
---
language: en
tags:
- exbert
license: mit
duplicated_from: gpt2
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
xusenlin/duee-gplinker
|
xusenlin
| 2023-06-19T14:53:10Z | 36 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"event extraction",
"zh",
"dataset:DuEE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-19T14:22:12Z |
---
language:
- zh
tags:
- event extraction
license: apache-2.0
datasets:
- DuEE
metrics:
- f1
---
# GPLinker Event Extraction Model
## Model Introduction
+ Dataset: Baidu `DUEE` information extraction
+ Method: [GPLinker: GlobalPointer-based joint event extraction](https://spaces.ac.cn/archives/8926)
## Usage
```commandline
pip install litie
```
```python
from pprint import pprint
from litie.pipelines import EventExtractionPipeline
pipeline = EventExtractionPipeline("gplinker", model_name_or_path="xusenlin/duee-gplinker", model_type="bert")
text = "油服巨头哈里伯顿裁员650人 因美国油气开采活动放缓。"
pprint(pipeline(text))
# Output
[
[
{
"event_type": "组织关系-裁员",
"arguments": [
{
"role": "裁员人数",
"argument": "650人"
},
{
"role": "裁员方",
"argument": "油服巨头哈里伯顿"
}
]
}
]
]
```
Detailed code for model training and inference is available in [litie](https://github.com/xusenlinzy/lit-ie).
|
casque/YOZORA.vae
|
casque
| 2023-06-19T14:50:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T14:43:18Z |
---
license: creativeml-openrail-m
---
|
titanicc/titanicdrpt
|
titanicc
| 2023-06-19T14:49:59Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-classification",
"arxiv:2009.06978",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T14:38:43Z |
---
duplicated_from: microsoft/DialogRPT-human-vs-rand
---
# Demo
Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)
| Context | Response | `human_vs_rand` score |
| :------ | :------- | :------------: |
| I love NLP! | He is a great basketball player. | 0.027 |
| I love NLP! | Can you tell me how it works? | 0.754 |
| I love NLP! | Me too! | 0.631 |
The `human_vs_rand` score predicts how likely the response is to correspond to the given context, rather than being a random response.
# DialogRPT-human-vs-rand
### Dialog Ranking Pretrained Transformers
> How likely a dialog response is upvoted 👍 and/or gets replied 💬?
This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is trained to predict.
It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/) trained on more than 100 million human feedback data points.
It can be used to improve existing dialog generation models (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking their generated response candidates.
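As a rough sketch adapted from the upstream DialogRPT usage (names and exact scores are illustrative), a context/response pair is scored by joining the two strings with GPT-2's `<|endoftext|>` separator and applying a sigmoid to the classification logit:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Canonical upstream checkpoint; this card is a duplicate of it
model_id = "microsoft/DialogRPT-human-vs-rand"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

def human_vs_rand_score(context: str, response: str) -> float:
    # Context and response are concatenated with the <|endoftext|> separator
    inputs = tokenizer.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs, return_dict=True).logits
    return torch.sigmoid(logits).item()

print(human_vs_rand_score("I love NLP!", "Can you tell me how it works?"))  # ~0.75 per the demo table
```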
Quick Links:
* [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/)
* [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT)
* [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)
We considered the following tasks and provided corresponding pretrained models.
|Task | Description | Pretrained model |
| :------------- | :----------- | :-----------: |
| **Human feedback** | **given a context and its two human responses, predict...**|
| `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) |
| `width`| ... which gets more direct replies? | [model card](https://huggingface.co/microsoft/DialogRPT-width) |
| `depth`| ... which gets longer follow-up thread? | [model card](https://huggingface.co/microsoft/DialogRPT-depth) |
| **Human-like** (human vs fake) | **given a context and one human response, distinguish it with...** |
| `human_vs_rand`| ... a random human response | this model |
| `human_vs_machine`| ... a machine generated response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-machine) |
### Contact:
Please create an issue on [our repo](https://github.com/golsun/DialogRPT)
### Citation:
```
@inproceedings{gao2020dialogrpt,
title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
year={2020},
booktitle={EMNLP}
}
```
|
ghze/Taxi-v3
|
ghze
| 2023-06-19T14:47:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T23:05:08Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ghze/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sixkiller/sixkiller
|
sixkiller
| 2023-06-19T14:47:38Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-30T11:55:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.66 +/- 11.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained policy
checkpoint = load_from_hub("sixkiller/sixkiller", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Ditrip/rl_course_vizdoom_health_gathering_supreme
|
Ditrip
| 2023-06-19T14:47:23Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T14:35:11Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.30 +/- 5.13
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Ditrip/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
projecte-aina/roberta-base-ca-v2-cawikitc
|
projecte-aina
| 2023-06-19T14:36:45Z | 118 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"zero-shot",
"zero-shot-classification",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-05-19T14:32:53Z |
---
pipeline_tag: zero-shot-classification
license: apache-2.0
language:
- ca
tags:
- zero-shot
- text-classification
widget:
- text: "'Tierra firme' de Marqués-Marcet inaugura el Festival de cinema de Guadalajara amb Catalunya com a convidada d'honor. El director del film afirma sentir-se orgullós de formar part d'aquesta nova generació de cineastes catalans amb moltes dones directores."
candidate_labels: societat, política, cultura, economia
multi_class: true
hypothesis_template: Aquest article tracta sobre {}.
---
# RoBERTa-ca-CaWikiTC
## Overview
<details>
<summary>Click to expand</summary>
- **Model type:** Language Model
- **Architecture:** RoBERTa-base
- **Language:** Catalan
- **License:** Apache 2.0
- **Task:** Zero-Shot Text Classification
- **Data:** CaWikiTC
</details>
## Model description
The **roberta-base-ca-v2-cawikitc** (RoBERTa-ca-CaWikiTC) is a Zero-Shot Text Classification model in Catalan created by fine-tuning [RoBERTa-base-ca-v2](https://huggingface.co/projecte-aina/roberta-large-ca-v2) with a classification dataset, CaWikiTC, reformulated as entailment. This model was developed as part of the experimental research presented in the following paper ["Entailment-based Task Transfer for Catalan Text Classification in Small Data Regimes"]().
## Intended uses and limitations
This model can be used for zero-shot text classification in Catalan. It has been trained with a fixed hypothesis template, "Aquest article tracta sobre {}.", and Wikipedia-based articles as premises, and may not generalize well for all use cases.
## How to use
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="ibaucells/RoBERTa-ca-CaWikiTC")
sentence = "'Tierra firme' de Marqués-Marcet inaugura el Festival de cinema de Guadalajara amb Catalunya com a convidada d'honor. El director del film afirma sentir-se orgullós de formar part d'aquesta nova generació de cineastes catalans amb moltes dones directores."
candidate_labels = ["societat", "política", "cultura", "economia"]
template = "Aquest article tracta sobre {}."
output = classifier(sentence, candidate_labels, hypothesis_template=template, multi_label=False)
print(output)
print(f'Predicted class: {output["labels"][0]}')
```
## Limitations and bias
No measures have been taken to estimate the bias and toxicity embedded in the model.
## Training
### Training data
This model was fine-tuned for the Natural Language Inference (NLI) task on an automatically built, Wikipedia-based text classification dataset, [CaWikiTC](https://huggingface.co/ibaucells/CaWikiTC), reformulated as entailment. In the reformulation process, we generated two NLI examples for each text classification instance (text and label): an entailment example and a non-entailment example. In both cases, we employed the text as the premise and utilized a shared template to create the hypothesis ("Aquest article tracta {}."), which was completed with the correct label for the entailment example and a randomly-selected label from the remaining options for the non-entailment example.
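Purely as an illustration of this reformulation (helper and field names are ours, not from the paper):

```python
import random

TEMPLATE = "Aquest article tracta sobre {}."  # hypothesis template used elsewhere in this card

def to_nli_pairs(text, label, all_labels):
    # One entailment example (gold label) and one non-entailment example (random other label)
    wrong = random.choice([l for l in all_labels if l != label])
    return [
        {"premise": text, "hypothesis": TEMPLATE.format(label), "label": "entailment"},
        {"premise": text, "hypothesis": TEMPLATE.format(wrong), "label": "not_entailment"},
    ]
```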
### Training procedure
The pre-trained Catalan model [RoBERTa-base-ca-v2](https://huggingface.co/projecte-aina/roberta-large-ca-v2) was fine-tuned with the training data using a learning rate of 3e-5, a batch size of 16, seed 26 and a maximum of 10 epochs. The development set (converted into entailment) was used to select the best checkpoint according to the highest weighted F1 score in the classification task, which was obtained in the first epoch.
## Evaluation
### Evaluation results
This model was evaluated for the TeCla zero-shot text classification task (without specific fine-tuning for the task) and obtained weighted F1 scores of 75.0 in the coarse-grained task (4 classes) and 49.1 in the fine-grained task (53 classes).
## Additional information
### Contact
For further information, send an email to <[email protected]>.
### License
This work is distributed under an [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models be liable for any results arising from the use made by third parties of these models.
</details>
|
Saed2023/lilt-en-funsd
|
Saed2023
| 2023-06-19T14:34:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-04T16:06:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8784
- Answer: {'precision': 0.8651817116060961, 'recall': 0.9033047735618115, 'f1': 0.8838323353293414, 'number': 817}
- Header: {'precision': 0.6504854368932039, 'recall': 0.5630252100840336, 'f1': 0.6036036036036037, 'number': 119}
- Question: {'precision': 0.9073394495412844, 'recall': 0.9182915506035283, 'f1': 0.912782648823258, 'number': 1077}
- Overall Precision: 0.8768
- Overall Recall: 0.8912
- Overall F1: 0.8840
- Overall Accuracy: 0.7948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4369 | 10.53 | 200 | 0.9022 | {'precision': 0.8049065420560748, 'recall': 0.8433292533659731, 'f1': 0.8236700537955769, 'number': 817} | {'precision': 0.5317460317460317, 'recall': 0.5630252100840336, 'f1': 0.5469387755102041, 'number': 119} | {'precision': 0.8837420526793823, 'recall': 0.903435468895079, 'f1': 0.8934802571166208, 'number': 1077} | 0.8301 | 0.8589 | 0.8442 | 0.7888 |
| 0.047 | 21.05 | 400 | 1.3222 | {'precision': 0.8382526564344747, 'recall': 0.8690330477356181, 'f1': 0.8533653846153846, 'number': 817} | {'precision': 0.5447761194029851, 'recall': 0.6134453781512605, 'f1': 0.5770750988142292, 'number': 119} | {'precision': 0.8667866786678667, 'recall': 0.8941504178272981, 'f1': 0.8802559414990858, 'number': 1077} | 0.8346 | 0.8674 | 0.8507 | 0.7837 |
| 0.015 | 31.58 | 600 | 1.4745 | {'precision': 0.8549528301886793, 'recall': 0.8873929008567931, 'f1': 0.8708708708708709, 'number': 817} | {'precision': 0.5867768595041323, 'recall': 0.5966386554621849, 'f1': 0.5916666666666667, 'number': 119} | {'precision': 0.8755635707844905, 'recall': 0.9015784586815228, 'f1': 0.888380603842635, 'number': 1077} | 0.8503 | 0.8778 | 0.8638 | 0.7969 |
| 0.0051 | 42.11 | 800 | 1.5719 | {'precision': 0.8768472906403941, 'recall': 0.8714810281517748, 'f1': 0.8741559238796808, 'number': 817} | {'precision': 0.5736434108527132, 'recall': 0.6218487394957983, 'f1': 0.596774193548387, 'number': 119} | {'precision': 0.8794326241134752, 'recall': 0.9210770659238626, 'f1': 0.8997732426303855, 'number': 1077} | 0.8594 | 0.8833 | 0.8711 | 0.7923 |
| 0.0041 | 52.63 | 1000 | 1.6771 | {'precision': 0.8352402745995423, 'recall': 0.8935128518971848, 'f1': 0.8633944411590775, 'number': 817} | {'precision': 0.6568627450980392, 'recall': 0.5630252100840336, 'f1': 0.6063348416289592, 'number': 119} | {'precision': 0.8865116279069768, 'recall': 0.8848653667595172, 'f1': 0.8856877323420075, 'number': 1077} | 0.8532 | 0.8693 | 0.8612 | 0.7877 |
| 0.0039 | 63.16 | 1200 | 1.6064 | {'precision': 0.8609112709832134, 'recall': 0.8788249694002448, 'f1': 0.8697758933979407, 'number': 817} | {'precision': 0.6106194690265486, 'recall': 0.5798319327731093, 'f1': 0.5948275862068966, 'number': 119} | {'precision': 0.8897777777777778, 'recall': 0.9294336118848654, 'f1': 0.9091734786557675, 'number': 1077} | 0.8629 | 0.8882 | 0.8754 | 0.8009 |
| 0.0019 | 73.68 | 1400 | 1.7674 | {'precision': 0.8533178114086146, 'recall': 0.8971848225214198, 'f1': 0.8747016706443913, 'number': 817} | {'precision': 0.5769230769230769, 'recall': 0.5042016806722689, 'f1': 0.5381165919282511, 'number': 119} | {'precision': 0.8842676311030742, 'recall': 0.9080779944289693, 'f1': 0.8960146587265231, 'number': 1077} | 0.8560 | 0.8798 | 0.8677 | 0.7981 |
| 0.0007 | 84.21 | 1600 | 1.8380 | {'precision': 0.8469387755102041, 'recall': 0.9143206854345165, 'f1': 0.8793407886992348, 'number': 817} | {'precision': 0.6017699115044248, 'recall': 0.5714285714285714, 'f1': 0.5862068965517241, 'number': 119} | {'precision': 0.8931159420289855, 'recall': 0.9155060352831941, 'f1': 0.9041723979825768, 'number': 1077} | 0.8580 | 0.8947 | 0.8760 | 0.7931 |
| 0.0007 | 94.74 | 1800 | 1.8108 | {'precision': 0.8600478468899522, 'recall': 0.8800489596083231, 'f1': 0.8699334543254689, 'number': 817} | {'precision': 0.6435643564356436, 'recall': 0.5462184873949579, 'f1': 0.5909090909090908, 'number': 119} | {'precision': 0.8722849695916595, 'recall': 0.9322191272051996, 'f1': 0.9012567324955117, 'number': 1077} | 0.8563 | 0.8882 | 0.8720 | 0.7887 |
| 0.0004 | 105.26 | 2000 | 1.9035 | {'precision': 0.8627906976744186, 'recall': 0.9082007343941249, 'f1': 0.8849135360763267, 'number': 817} | {'precision': 0.6285714285714286, 'recall': 0.5546218487394958, 'f1': 0.5892857142857143, 'number': 119} | {'precision': 0.8955495004541326, 'recall': 0.9155060352831941, 'f1': 0.9054178145087237, 'number': 1077} | 0.8683 | 0.8912 | 0.8796 | 0.7965 |
| 0.0002 | 115.79 | 2200 | 1.8784 | {'precision': 0.8651817116060961, 'recall': 0.9033047735618115, 'f1': 0.8838323353293414, 'number': 817} | {'precision': 0.6504854368932039, 'recall': 0.5630252100840336, 'f1': 0.6036036036036037, 'number': 119} | {'precision': 0.9073394495412844, 'recall': 0.9182915506035283, 'f1': 0.912782648823258, 'number': 1077} | 0.8768 | 0.8912 | 0.8840 | 0.7948 |
| 0.0002 | 126.32 | 2400 | 1.9075 | {'precision': 0.8640093786635404, 'recall': 0.9020807833537332, 'f1': 0.8826347305389222, 'number': 817} | {'precision': 0.6296296296296297, 'recall': 0.5714285714285714, 'f1': 0.5991189427312775, 'number': 119} | {'precision': 0.9041970802919708, 'recall': 0.9201485608170845, 'f1': 0.9121030832949838, 'number': 1077} | 0.8731 | 0.8922 | 0.8826 | 0.7959 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gilang21/Anggun
|
gilang21
| 2023-06-19T14:24:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T14:20:12Z |
---
license: creativeml-openrail-m
---
|
gokuls/add_bert_12_layer_model_complete_training_new_120
|
gokuls
| 2023-06-19T13:51:17Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-18T13:24:31Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: add_bert_12_layer_model_complete_training_new_120
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_bert_12_layer_model_complete_training_new_120
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new_96](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new_96) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2619
- Accuracy: 0.2063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 5.4864 | 0.08 | 10000 | 5.4102 | 0.1898 |
| 5.4838 | 0.16 | 20000 | 5.3944 | 0.1919 |
| 5.2956 | 0.25 | 30000 | 5.3816 | 0.1933 |
| 5.418 | 0.33 | 40000 | 5.3667 | 0.1948 |
| 5.3825 | 0.41 | 50000 | 5.3490 | 0.1968 |
| 5.3783 | 0.49 | 60000 | 5.3301 | 0.1988 |
| 5.2869 | 0.57 | 70000 | 5.3140 | 0.2001 |
| 5.3668 | 0.66 | 80000 | 5.2981 | 0.2022 |
| 5.2709 | 0.74 | 90000 | 5.2782 | 0.2043 |
| 5.3297 | 0.82 | 100000 | 5.2619 | 0.2063 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lins567/1
|
lins567
| 2023-06-19T13:50:44Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-06-19T13:50:44Z |
---
license: bigscience-bloom-rail-1.0
---
|
Falah/News_Detection
|
Falah
| 2023-06-19T13:33:46Z | 0 | 2 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"bert",
"fake news detection",
"NLP",
"text-classification",
"license:openrail",
"region:us"
] |
text-classification
| 2023-06-19T12:37:09Z |
---
license: openrail
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- fake news detection
- NLP
---
Fake news detection using NLP transformers is an important application of natural language processing techniques. Transformers, such as the popular BERT (Bidirectional Encoder Representations from Transformers) model, have shown promising results in various NLP tasks, including text classification, sentiment analysis, and question answering. When applied to fake news detection, transformers can effectively analyze the textual content of news articles and make predictions about their authenticity.
Here are some key details about fake news detection using NLP transformers:
1. Transformer Architecture: Transformers are based on a self-attention mechanism that allows them to capture contextual relationships between words or tokens in a text. This architecture enables transformers to effectively process and understand the semantic meaning of textual data.
2. Pretraining: NLP transformers are typically pretrained on large-scale corpora to learn general language representations. This pretraining phase helps the model to capture semantic and syntactic patterns in text data, which can be later fine-tuned for specific tasks like fake news detection.
3. Fine-tuning: After pretraining, transformers are fine-tuned on task-specific datasets, which involve labeled examples of fake and real news articles. During fine-tuning, the model learns to classify news articles based on the patterns it has learned during pretraining.
4. Tokenization: Text data is tokenized into smaller units, such as words or subwords, before being fed into the transformer model. Tokenization helps in creating input representations that the model can understand and process efficiently.
5. Training Labels: Fake news detection typically requires a labeled dataset where each news article is annotated as either fake or real. These labels are used during the training process to optimize the model's parameters and make accurate predictions.
6. Model Evaluation: The performance of the fake news detection model is evaluated using standard evaluation metrics such as accuracy, precision, recall, and F1-score. These metrics provide insights into how well the model is able to correctly classify fake and real news articles.
7. Deployment: Once the model is trained and evaluated, it can be deployed in real-world applications to automatically detect and classify news articles. The model takes the textual content of an article as input and predicts its authenticity.
It's important to note that while NLP transformers have shown promising results in fake news detection, they are not foolproof and may have limitations. Building robust fake news detection systems requires careful data collection, preprocessing, and model training techniques to handle the nuances and challenges of the task.
Overall, NLP transformers provide a powerful framework for fake news detection by leveraging the contextual information in text data. They have the potential to contribute significantly to the identification and mitigation of misinformation in various domains.
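As a rough usage sketch, assuming the checkpoint loads with the standard `transformers` text-classification pipeline (the example headline is made up and the label names depend on the model config):
```python
from transformers import pipeline

# Load the fine-tuned fake-news classifier from the Hub
# (assumes the repo ships standard transformers weights and a tokenizer)
classifier = pipeline("text-classification", model="Falah/News_Detection")

headline = "Scientists discover a city of gold hidden under the Pacific Ocean"
prediction = classifier(headline)

# Each prediction is a dict with the predicted label and its confidence score
print(prediction)  # e.g. [{'label': 'fake', 'score': 0.98}] -- labels depend on the model config
```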
## Fake News Detection Report
This report provides an overview of the evaluation metrics for the fake news detection model using NLP transformers.
| Metric | Value |
|----------------|-----------|
| eval_loss | 0.093 |
| eval_accuracy | 0.979 |
| eval_precision | 0.980 |
| eval_recall | 0.979 |
| eval_f1 | 0.979 |
| eval_runtime | 19.63s |
| samples/s | 2.394 |
| steps/s | 0.153 |
| epoch | 5.0 |
The evaluation metrics demonstrate the performance of the fake news detection model. It achieved an accuracy of 0.979, precision of 0.980, recall of 0.979, and an F1 score of 0.979. The runtime for evaluation was 19.63 seconds, with a throughput of approximately 2.394 samples per second and 0.153 steps per second. The model was trained for 5.0 epochs.
|
martomor/distilbert-base-uncased-finetuned-emotion
|
martomor
| 2023-06-19T13:28:55Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T13:06:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9228068723042021
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2259
- Accuracy: 0.9225
- F1: 0.9228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8438 | 1.0 | 250 | 0.3163 | 0.9055 | 0.9033 |
| 0.2492 | 2.0 | 500 | 0.2259 | 0.9225 | 0.9228 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
Kcatua/Cabezon
|
Kcatua
| 2023-06-19T13:21:44Z | 0 | 0 | null |
[
"ab",
"ar",
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-19T13:18:24Z |
---
language:
- ab
- ar
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sliha66/Monocka
|
Sliha66
| 2023-06-19T13:10:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T13:10:48Z |
---
license: creativeml-openrail-m
---
|
dfqryj/fs
|
dfqryj
| 2023-06-19T13:08:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T13:08:09Z |
---
license: creativeml-openrail-m
---
|
stemmets/q-FrozenLake-v1-4x4-noSlippery
|
stemmets
| 2023-06-19T12:59:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T12:59:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or: import gymnasium as gym

# `load_from_hub` is a helper from the Deep RL course notebook (downloads and unpickles the saved model dict)
model = load_from_hub(repo_id="stemmets/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AustinCarthy/MixGPT2V2_domain_100KP_BFall_fromB_95K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-19T12:58:02Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-19T10:41:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2V2_domain_100KP_BFall_fromB_95K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2V2_domain_100KP_BFall_fromB_95K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_MixGPT2V2_using_benign_95K_top_p_0.75domain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0175
- Accuracy: 0.9975
- F1: 0.9734
- Precision: 0.9868
- Recall: 0.9604
- Roc Auc Score: 0.9799
- Tpr At Fpr 0.01: 0.9396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0155 | 1.0 | 22121 | 0.0174 | 0.9957 | 0.9543 | 0.9590 | 0.9496 | 0.9738 | 0.786 |
| 0.0083 | 2.0 | 44242 | 0.0215 | 0.9959 | 0.9555 | 0.9903 | 0.923 | 0.9613 | 0.888 |
| 0.0046 | 3.0 | 66363 | 0.0144 | 0.9973 | 0.9717 | 0.9779 | 0.9656 | 0.9823 | 0.5986 |
| 0.0019 | 4.0 | 88484 | 0.0192 | 0.9973 | 0.9714 | 0.9828 | 0.9602 | 0.9797 | 0.9344 |
| 0.0011 | 5.0 | 110605 | 0.0175 | 0.9975 | 0.9734 | 0.9868 | 0.9604 | 0.9799 | 0.9396 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chencjiajy/q-FrozenLake-v1-4x4-noSlippery
|
chencjiajy
| 2023-06-19T12:50:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T12:50:37Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or: import gymnasium as gym

# `load_from_hub` is a helper from the Deep RL course notebook (downloads and unpickles the saved model dict)
model = load_from_hub(repo_id="chencjiajy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
patrickvonplaten/wav2vec2-large-mms-1b-turkish-colab
|
patrickvonplaten
| 2023-06-19T12:46:37Z | 39 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_6_1",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-19T10:28:38Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- common_voice_6_1
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-turkish-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_6_1
type: common_voice_6_1
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.22275559187008478
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-turkish-colab
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_6_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1556
- Wer: 0.2228
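A minimal inference sketch using the `transformers` ASR pipeline (the audio file path is a placeholder; 16 kHz mono input is assumed):
```python
from transformers import pipeline

# Load the fine-tuned Turkish MMS checkpoint as an ASR pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-large-mms-1b-turkish-colab",
)

# Transcribe a local recording (placeholder path)
result = asr("speech_tr.wav")
print(result["text"])
```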
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.905 | 0.92 | 100 | 0.2146 | 0.2796 |
| 0.2901 | 1.83 | 200 | 0.1673 | 0.2317 |
| 0.2659 | 2.75 | 300 | 0.1608 | 0.2293 |
| 0.2398 | 3.67 | 400 | 0.1556 | 0.2228 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Malaika/Reinforce-CartPole-v1-1
|
Malaika
| 2023-06-19T12:46:19Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T12:46:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
djc0213/my_awesome_model
|
djc0213
| 2023-06-19T12:42:58Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T09:09:29Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: djc0213/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# djc0213/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0286
- Validation Loss: 0.2731
- Train Accuracy: 0.9325
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1327 | 0.1906 | 0.9296 | 0 |
| 0.0631 | 0.2219 | 0.9301 | 1 |
| 0.0286 | 0.2731 | 0.9325 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hivaze/dolly-v2-7b-lora-emphatical_daily_dialogues
|
hivaze
| 2023-06-19T12:40:12Z | 0 | 0 | null |
[
"dataset:hivaze/emphatical_daily_dialogues",
"region:us"
] | null | 2023-06-19T11:48:09Z |
---
datasets:
- hivaze/emphatical_daily_dialogues
---
# Model Card for Model ID
This model is a LoRA adapter for databricks/dolly-v2-7b, fine-tuned on hivaze/emphatical_daily_dialogues.
The main goal of this model is to generate empathetic dialogues that are controlled by instructions.
## Model Details
### Model Description
Prompt template: `"{intro}\n\n### Instruction:\n{instruction}\n\n### Response:\n{response}\n"`\
Example intro: "You are a kind and empathetic interlocutor. You are talking to a person. Below is an instruction that describes a task. Write a response that appropriately completes the request" \
Example instruction: "You try to chit-chat. Complete a phrase, acting like an interlocutor."
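Putting the template together, a minimal inference sketch, assuming the adapter is loaded on top of `databricks/dolly-v2-7b` with `peft` (the generation settings are illustrative, not the ones used in training):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "databricks/dolly-v2-7b"
adapter_id = "hivaze/dolly-v2-7b-lora-emphatical_daily_dialogues"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Assemble the prompt from the template described above
intro = ("You are a kind and empathetic interlocutor. You are talking to a person. "
         "Below is an instruction that describes a task. "
         "Write a response that appropriately completes the request")
instruction = "You try to chit-chat. Complete a phrase, acting like an interlocutor."
prompt = f"{intro}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```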
Training params:
```
train_args = TrainingArguments(
per_device_train_batch_size=8, # can be 4 with llama
per_device_eval_batch_size=8, # can be 4 with llama
gradient_accumulation_steps=4,
warmup_steps=20,
# max_steps=200,
optim="adamw_torch",
learning_rate=4e-5, # many possible values here from 1e-5 to 2e-4
# save_strategy="steps",
fp16=True,
# bf16=True, # a100 required
num_train_epochs=1,
evaluation_strategy="steps",
eval_steps=50,
save_strategy="steps",
save_steps=400,
logging_strategy="steps",
logging_steps=10,
logging_dir=f"{local_output_dir}/runs",
report_to="tensorboard",
output_dir=local_output_dir
)
```
LoRA config:
```
config = LoraConfig(
r=16, # can be 8 with llama
lora_alpha=32, # can be 16 with llama
# target_modules=["q_proj", "v_proj"],
target_modules=['query_key_value'],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM"
)
```
- **Developed by:** hivaze
- **Model type:** LoRA adapter for GPTNeoXForCausalLM
- **Language(s) (NLP):** Primarly english
- **Finetuned from model [optional]:** databricks/dolly-v2-7b
- **Git repository**: https://github.com/hivaze/friendly_chatbot_task
### Tensorboard

|
samhog/psychology-alpaca-merged
|
samhog
| 2023-06-19T12:21:44Z | 20 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-12T11:49:51Z |
# Psychology Alpaca 🍩
This is a LLaMA-7B language model trained on 10,000 psychology-related prompts and answers generated by ChatGPT. The model was trained on a single A100 GPU from Google Colab. The model shows some knowledge in the field of psychology and generally performs better than its base model.
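A minimal usage sketch, assuming the merged weights load directly with `transformers` and respond to Alpaca-style instruction prompts (the prompt format and example question are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "samhog/psychology-alpaca-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Alpaca-style prompt (assumed format; adjust if the model was trained with a different template)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nHow can I deal with exam anxiety?\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```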
### Background
This model was developed as part of a thesis project in the field of machine learning and psychology. It was used as a base model for further fine-tuning using reinforcement learning. The goal of the thesis was to compare reinforcement learning from *human feedback* and *AI feedback*. When the paper is available, it will be linked here!
**Authors:**
Samuel Höglund, [email protected];
Josef Khedri, [email protected]
|
aiknight87/falcon-7b-instruct-tuned-dolly-500
|
aiknight87
| 2023-06-19T12:11:09Z | 1 | 0 |
peft
|
[
"peft",
"RefinedWebModel",
"custom_code",
"4-bit",
"region:us"
] | null | 2023-06-19T12:09:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
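A sketch of the equivalent quantization config in code, assuming the standard `transformers` + `bitsandbytes` API (the base checkpoint name is an assumption based on the repository title):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror of the flags listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumed base checkpoint for this adapter (judging by the repository name)
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",
    quantization_config=bnb_config,
    trust_remote_code=True,  # Falcon shipped custom code ("RefinedWebModel") at the time
    device_map="auto",
)
```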
### Framework versions
- PEFT 0.4.0.dev0
|
hopkins/svo-1
|
hopkins
| 2023-06-19T11:53:18Z | 174 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T10:40:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: svo-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# svo-1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 9
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.0334 | 0.73 | 200 | 2.5510 |
| 2.2609 | 1.45 | 400 | 2.0441 |
| 2.0306 | 2.18 | 600 | 1.9798 |
| 1.9782 | 2.91 | 800 | 1.9590 |
| 1.998 | 3.63 | 1000 | 1.9511 |
| 1.9482 | 4.36 | 1200 | 1.9366 |
| 1.9337 | 5.09 | 1400 | 1.9268 |
| 1.9093 | 5.82 | 1600 | 1.9175 |
| 1.8956 | 6.54 | 1800 | 1.9126 |
| 1.8789 | 7.27 | 2000 | 1.9094 |
| 1.8525 | 8.0 | 2200 | 1.9038 |
| 1.8325 | 8.73 | 2400 | 1.9072 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.12.0
- Tokenizers 0.13.3
|
VineX/VxPhotoTalk
|
VineX
| 2023-06-19T11:51:03Z | 0 | 0 | null |
[
"arxiv:2004.12992",
"region:us"
] | null | 2023-06-19T11:10:50Z |
# MakeItTalk: Speaker-Aware Talking-Head Animation
This is the code repository implementing the paper:
> **MakeItTalk: Speaker-Aware Talking-Head Animation**
>
> [Yang Zhou](https://people.umass.edu/~yangzhou),
> [Xintong Han](http://users.umiacs.umd.edu/~xintong/),
> [Eli Shechtman](https://research.adobe.com/person/eli-shechtman),
> [Jose Echevarria](http://www.jiechevarria.com) ,
> [Evangelos Kalogerakis](https://people.cs.umass.edu/~kalo/),
> [Dingzeyu Li](https://dingzeyu.li)
>
> SIGGRAPH Asia 2020
>
> **Abstract** We present a method that generates expressive talking-head videos from a single facial image with audio as the only input. In contrast to previous attempts to learn direct mappings from audio to raw pixels for creating talking faces, our method first disentangles the content and speaker information in the input audio signal. The audio content robustly controls the motion of lips and nearby facial regions, while the speaker information determines the specifics of facial expressions and the rest of the talking-head dynamics. Another key component of our method is the prediction of facial landmarks reflecting the speaker-aware dynamics. Based on this intermediate representation, our method works with many portrait images in a single unified framework, including artistic paintings, sketches, 2D cartoon characters, Japanese mangas, and stylized caricatures.
> In addition, our method generalizes well for faces and characters that were not observed during training. We present extensive quantitative and qualitative evaluation of our method, in addition to user studies, demonstrating generated talking-heads of significantly higher quality compared to prior state-of-the-art methods.
>
> [[Project page]](https://people.umass.edu/~yangzhou/MakeItTalk/)
> [[Paper]](https://people.umass.edu/~yangzhou/MakeItTalk/MakeItTalk_SIGGRAPH_Asia_Final_round-5.pdf)
> [[Video]](https://www.youtube.com/watch?v=OU6Ctzhpc6s)
> [[Arxiv]](https://arxiv.org/abs/2004.12992)
> [[Colab Demo]](quick_demo.ipynb)
> [[Colab Demo TDLR]](quick_demo_tdlr.ipynb)

Figure. Given an audio speech signal and a single portrait image as input (left), our model generates speaker-aware talking-head animations (right).
Both the speech signal and the input face image are not observed during the model training process.
Our method creates both non-photorealistic cartoon animations (top) and natural human face videos (bottom).
## Updates
- [x] Generate new puppet! (tested on Ubuntu)
- [x] Pre-trained models
- [x] Google colab quick demo for natural faces [[detail]](quick_demo.ipynb) [[TDLR]](quick_demo_tdlr.ipynb)
- [ ] Training code for each module
## Requirements
- Python environment 3.6
```
conda create -n makeittalk_env python=3.6
conda activate makeittalk_env
```
- ffmpeg (https://ffmpeg.org/download.html)
```
sudo apt-get install ffmpeg
```
- python packages
```
pip install -r requirements.txt
```
- `winehq-stable` for cartoon face warping in Ubuntu (https://wiki.winehq.org/Ubuntu). Tested on Ubuntu16.04, wine==5.0.3.
```
sudo dpkg --add-architecture i386
wget -nc https://dl.winehq.org/wine-builds/winehq.key
sudo apt-key add winehq.key
sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ xenial main'
sudo apt update
sudo apt install --install-recommends winehq-stable
```
## Pre-trained Models
Download the following pre-trained models to `examples/ckpt` folder for testing your own animation.
| Model | Link to the model |
| :-------------: | :---------------: |
| Voice Conversion | [Link](https://drive.google.com/file/d/1ZiwPp_h62LtjU0DwpelLUoodKPR85K7x/view?usp=sharing) |
| Speech Content Module | [Link](https://drive.google.com/file/d/1r3bfEvTVl6pCNw5xwUhEglwDHjWtAqQp/view?usp=sharing) |
| Speaker-aware Module | [Link](https://drive.google.com/file/d/1rV0jkyDqPW-aDJcj7xSO6Zt1zSXqn1mu/view?usp=sharing) |
| Image2Image Translation Module | [Link](https://drive.google.com/file/d/1i2LJXKp-yWKIEEgJ7C6cE3_2NirfY_0a/view?usp=sharing) |
| Non-photorealistic Warping (.exe) | [Link](https://drive.google.com/file/d/1rlj0PAUMdX8TLuywsn6ds_G6L63nAu0P/view?usp=sharing) |
## Animate Your Portraits!
- Download pre-trained embedding [[here]](https://drive.google.com/file/d/18-0CYl5E6ungS3H4rRSHjfYvvm-WwjTI/view?usp=sharing) and save to `examples/dump` folder.
### _Natural Human Faces / Paintings_
- crop your portrait image into size `256x256` and put it under `examples` folder with `.jpg` format.
Make sure the head is almost in the middle (check existing examples for a reference).
- put test audio files under `examples` folder as well with `.wav` format.
- animate!
```
python main_end2end.py --jpg <portrait_file>
```
- use the additional args `--amp_lip_x <x> --amp_lip_y <y> --amp_pos <pos>`
to amplify lip motion (in the x/y-axis direction) and head motion displacements; default values are `<x>=2., <y>=2., <pos>=.5`
### _Cartoon Faces_
- put test audio files under `examples` folder as well with `.wav` format.
- animate one of the existing puppets
| Puppet Name | wilk | smiling_person | sketch | color | cartoonM | danbooru1 |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Image |  |  |  |  |  |  |
```
python main_end2end_cartoon.py --jpg <cartoon_puppet_name_with_extension> --jpg_bg <puppet_background_with_extension>
```
- `--jpg_bg` takes a same-size image as the fixed background for the animation, such as the puppet's body. If you want to use a background, make sure the puppet face image (i.e. the `--jpg` image) is in `png` format and transparent outside the face area. If you don't need any background, pass a same-size placeholder image (e.g. a pure white image) to fill the argument.
- use the additional args `--amp_lip_x <x> --amp_lip_y <y> --amp_pos <pos>`
to amplify lip motion (in the x/y-axis direction) and head motion displacements; default values are `<x>=2., <y>=2., <pos>=.5`
### _Generate Your New Puppet_
- put the cartoon image under `examples_cartoon`
- install conda environment `foa_env_py2` (tested on python 2) for Face-of-art (https://github.com/papulke/face-of-art).
Download the pre-trained weight [here](https://www.dropbox.com/sh/hrxcyug1bmbj6cs/AAAxq_zI5eawcLjM8zvUwaXha?dl=0) and put it under `examples/ckpt`.
Activate the environment.
```
source activate foa_env_py2
```
- create necessary files to animate your cartoon image, i.e.
`<your_puppet>_open_mouth.txt`, `<your_puppet>_close_mouth.txt`, `<your_puppet>_open_mouth_norm.txt`, `<your_puppet>_scale_shift.txt`, `<your_puppet>_delauney.txt`
```
python main_gen_new_puppet.py <your_puppet_with_file_extension>
```
- in detail, it takes 3 steps
- Face-of-art automatic cartoon landmark detection.
- If it's wrong or not accurate, you can use our tool to drag and refine the landmarks.
- Estimate the closed mouth landmarks to serve as network input.
- Delauney triangulate the image with landmarks.
- check puppet name `smiling_person_example.png` for an example.
|  |  | 
| :---: | :---: | :---: |
| Landmark Adjustment Tool | Closed lips estimation | Delaunay Triangulation |
## Train
### Train Voice Conversion Module
Todo...
### Train Content Branch
- Create dataset root directory `<root_dir>`
- Dataset: Download preprocessed dataset [[here]](https://drive.google.com/drive/folders/1EwuAy3j1b9Zc1MsidUfxG_pJGc_cV60O?usp=sharing), and put it under `<root_dir>/dump`.
- Train script: Run script below. Models will be saved in `<root_dir>/ckpt/<train_instance_name>`.
```shell script
python main_train_content.py --train --write --root_dir <root_dir> --name <train_instance_name>
```
### Train Speaker-Aware Branch
Todo...
### Train Image-to-Image Translation
Todo...
## [License](LICENSE.md)
## Acknowledgement
We would like to thank Timothy Langlois for the narration, and
[Kaizhi Qian](https://scholar.google.com/citations?user=uEpr4C4AAAAJ&hl=en)
for the help with the [voice conversion module](https://auspicious3000.github.io/icassp-2020-demo/).
We thank [Jakub Fiser](https://research.adobe.com/person/jakub-fiser/) for implementing the real-time GPU version of the triangle morphing algorithm.
We thank Daichi Ito for sharing the caricature image and Dave Werner
for Wilk, the gruff but ultimately lovable puppet.
This research is partially funded by NSF (EAGER-1942069)
and a gift from Adobe. Our experiments were performed in the
UMass GPU cluster obtained under the Collaborative Fund managed
by the MassTech Collaborative.
|
metalwhale/openbuddy-openllama-7b-v5-q4_0
|
metalwhale
| 2023-06-19T11:46:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-19T11:39:45Z |
---
license: apache-2.0
---
## How to reproduce
```bash
# Prerequisites
apt update -y
apt install -y git git-lfs python3 python3-pip curl pkg-config libssl-dev
python3 -m pip install numpy==1.25.0 sentencepiece==0.1.99
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh && source "$HOME/.cargo/env"
# Clone repositories
git clone https://huggingface.co/OpenBuddy/openbuddy-openllama-7b-v5-fp16 # Commit hash 1fedac68b34952eecec849a5938b778d6004d632
git clone https://github.com/ggerganov/llama.cpp # Commit hash 16b9cd193965769089881bb8ec012fccca7b37b6
git clone --recurse-submodules https://github.com/rustformers/llm.git # Commit hash 3becd728c0d6eeb2d649f86158c7018d5aaaba40
# Build ggml model
cd llama.cpp/
python3 convert.py ../openbuddy-openllama-7b-v5-fp16/
cd ../llm/
cargo build --release
cargo run --release llama quantize ../openbuddy-openllama-7b-v5-fp16/ggml-model-f16.bin ../openbuddy-openllama-7b-v5-fp16/openbuddy-openllama-7b-v5-q4_0.bin q4_0
```
(The commit hashes are confirmed at the time of 2023/06/19)
|