| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-27 18:27:08) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 533 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-27 18:22:57) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| christinakyp/whisper-tiny.en-train1 | christinakyp | 2023-06-19T11:21:07Z | 86 | 0 | transformers | transformers, pytorch, tensorboard, whisper, automatic-speech-recognition, asr, generated_from_trainer, en, dataset:christinakyp/dsing1, license:apache-2.0, endpoints_compatible, region:us | automatic-speech-recognition | 2023-06-19T08:37:22Z |
---
language:
- en
license: apache-2.0
tags:
- asr
- generated_from_trainer
datasets:
- christinakyp/dsing1
model-index:
- name: Whisper Tiny Sing - CK
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Sing - CK
This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the DSing1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
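As a quick way to try the model, here is a minimal inference sketch using the Transformers pipeline (the audio file name is a placeholder; Whisper expects 16 kHz mono audio):
```python
from transformers import pipeline

# load the fine-tuned checkpoint as an ASR pipeline
asr = pipeline("automatic-speech-recognition", model="christinakyp/whisper-tiny.en-train1")

# "singing_sample.wav" is a placeholder path
print(asr("singing_sample.wav")["text"])
```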
| antphb/DS-Chatbox-gpt2-vietnamese-V3 | antphb | 2023-06-19T11:13:12Z | 46 | 0 | transformers | transformers, pytorch, tensorboard, gpt2, text-generation, generated_from_trainer, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 2023-06-18T07:40:12Z |
---
tags:
- generated_from_trainer
model-index:
- name: DS-Chatbox-gpt2-vietnamese-V3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DS-Chatbox-gpt2-vietnamese-V3
This model is a fine-tuned version of [NlpHUST/gpt2-vietnamese](https://huggingface.co/NlpHUST/gpt2-vietnamese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0015
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.9759 | 0.66 | 1400 | 2.6781 |
| 2.5019 | 1.31 | 2800 | 2.4921 |
| 2.3352 | 1.97 | 4200 | 2.3726 |
| 2.0759 | 2.62 | 5600 | 2.3240 |
| 1.9303 | 3.28 | 7000 | 2.3279 |
| 1.7867 | 3.93 | 8400 | 2.2556 |
| 1.5133 | 4.59 | 9800 | 2.3424 |
| 1.3726 | 5.25 | 11200 | 2.5290 |
| 1.1925 | 5.9 | 12600 | 2.5132 |
| 1.0211 | 6.56 | 14000 | 2.7322 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
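Since the card does not include a usage example, here is a minimal text-generation sketch with the Transformers pipeline (the Vietnamese prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="antphb/DS-Chatbox-gpt2-vietnamese-V3")

# placeholder Vietnamese prompt ("Hello, how can I help you?")
print(generator("Xin chào, tôi có thể giúp gì cho bạn?", max_length=64)[0]["generated_text"])
```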
| SouhilOuchene/sentence-camembert-base | SouhilOuchene | 2023-06-19T11:05:13Z | 4 | 0 | sentence-transformers | sentence-transformers, pytorch, camembert, setfit, text-classification, arxiv:2209.11055, license:apache-2.0, region:us | text-classification | 2023-06-18T21:10:26Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# SouhilOuchene/sentence-camembert-base
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("SouhilOuchene/sentence-camembert-base")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| LordSomen/lunar_radar_v1_234 | LordSomen | 2023-06-19T10:58:10Z | 1 | 0 | stable-baselines3 | stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us | reinforcement-learning | 2023-06-19T10:57:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: lunar_radar
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.01 +/- 17.33
name: mean_reward
verified: false
---
# **lunar_radar** Agent playing **LunarLander-v2**
This is a trained model of a **lunar_radar** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
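Until the TODO above is filled in, here is a minimal loading sketch; the algorithm class (PPO) and the checkpoint filename are assumptions, so check the repository's file listing:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO  # algorithm class is an assumption; the card does not state it

# the filename is an assumption; check the repo files
checkpoint = load_from_hub(
    repo_id="LordSomen/lunar_radar_v1_234",
    filename="lunar_radar_v1_234.zip",
)
model = PPO.load(checkpoint)
```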
| YeungNLP/firefly-baichuan-7b-qlora-sft | YeungNLP | 2023-06-19T10:49:34Z | 0 | 17 | null | region:us | null | 2023-06-18T17:31:11Z |
Efficient instruction fine-tuning of the baichuan-7b model with QLoRA on one million training samples.
GitHub: https://github.com/yangjianxin1/Firefly
| kejolong/mechaarmor | kejolong | 2023-06-19T10:48:05Z | 0 | 0 | null | license:creativeml-openrail-m, region:us | null | 2023-06-19T10:45:36Z |
---
license: creativeml-openrail-m
---
| jeel/question-Classification | jeel | 2023-06-19T10:26:09Z | 0 | 0 | tensorflowtts | tensorflowtts, text-classification, en, license:afl-3.0, region:us | text-classification | 2023-06-19T10:24:16Z |
---
license: afl-3.0
language:
- en
metrics:
- accuracy
library_name: tensorflowtts
pipeline_tag: text-classification
---
| tux/a2c-PandaReachDense-v2 | tux | 2023-06-19T10:17:53Z | 3 | 0 | stable-baselines3 | stable-baselines3, PandaReachDense-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us | reinforcement-learning | 2023-06-19T10:15:08Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.44 +/- 0.21
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
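Until the TODO above is filled in, here is a minimal sketch; the checkpoint filename is an assumption, and the environment requires `panda-gym`:
```python
import gym
import panda_gym  # registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# the filename is an assumption; check the repo files
checkpoint = load_from_hub("tux/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```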
| Malaika/dqn-SpaceInvadersNoFrameskip-v4 | Malaika | 2023-06-19T10:11:45Z | 0 | 0 | stable-baselines3 | stable-baselines3, SpaceInvadersNoFrameskip-v4, deep-reinforcement-learning, reinforcement-learning, model-index, region:us | reinforcement-learning | 2023-06-18T19:59:27Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 336.00 +/- 102.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Malaika -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Malaika -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Malaika
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| DicksonMassawe/finetuning-spam-detection-model | DicksonMassawe | 2023-06-19T10:10:45Z | 104 | 0 | transformers | transformers, pytorch, distilbert, text-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | text-classification | 2023-06-18T16:23:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-spam-detection-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-spam-detection-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0065
- Accuracy: 0.9990
- F1: 0.9990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0958 | 1.0 | 965 | 0.0041 | 0.9995 | 0.9995 |
| 0.009 | 2.0 | 1930 | 0.0042 | 0.9995 | 0.9995 |
| 0.0018 | 3.0 | 2895 | 0.0049 | 0.9995 | 0.9995 |
| 0.001 | 4.0 | 3860 | 0.0034 | 0.9995 | 0.9995 |
| 0.0 | 5.0 | 4825 | 0.0065 | 0.9990 | 0.9990 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
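The card lacks a usage example; here is a minimal classification sketch with the Transformers pipeline (the example message is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DicksonMassawe/finetuning-spam-detection-model")

# placeholder example message
print(classifier("Congratulations! You have won a free prize. Click here to claim it."))
```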
| alawryaguila/dog_results | alawryaguila | 2023-06-19T10:06:31Z | 31 | 0 | diffusers | diffusers, tensorboard, stable-diffusion, stable-diffusion-diffusers, text-to-image, dreambooth, base_model:CompVis/stable-diffusion-v1-4, base_model:finetune:CompVis/stable-diffusion-v1-4, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us | text-to-image | 2023-06-14T09:22:40Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - alawryaguila/dog_results
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
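A minimal inference sketch with Diffusers, using the instance prompt from this card (the extra prompt text and output filename are placeholders):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "alawryaguila/dog_results", torch_dtype=torch.float16
).to("cuda")

# "a photo of sks dog" is the instance prompt this model was trained on
image = pipe("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```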
| JaeHwi/my_awesome_rot_clm-model | JaeHwi | 2023-06-19T10:03:20Z | 219 | 0 | transformers | transformers, pytorch, tensorboard, gpt2, text-generation, generated_from_trainer, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 2023-06-19T08:16:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_rot_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_rot_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 197 | 4.5097 |
| No log | 2.0 | 394 | 4.4644 |
| 4.5685 | 3.0 | 591 | 4.4531 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| Ditrip/ppoCleanRL-LunarLander-v2 | Ditrip | 2023-06-19T09:39:12Z | 0 | 0 | null | tensorboard, LunarLander-v2, ppo, deep-reinforcement-learning, reinforcement-learning, custom-implementation, deep-rl-course, model-index, region:us | reinforcement-learning | 2023-06-19T09:39:07Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -106.68 +/- 48.84
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
| phi0112358/PMC_LLaMA-7B-ggml | phi0112358 | 2023-06-19T09:21:32Z | 0 | 3 | null | llama, ggml, pubmed, medicine, research, papers, en, dataset:S2ORC, region:us | null | 2023-06-19T02:41:49Z |
---
datasets:
- S2ORC
language:
- en
tags:
- llama
- ggml
- pubmed
- medicine
- research
- papers
---
# PMC_LLaMA - finetuned on PubMed Central papers
**This is a ggml conversion of chaoyi-wu's [PMC_LLAMA_7B_10_epoch](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch) model.**
**It is a LLaMA model fine-tuned on PubMed Central papers from**
**the Semantic Scholar Open Research Corpus [dataset](https://github.com/allenai/s2orc).**
Currently I have only converted it with the **new k-quant method Q5_K_M**. I will gladly make more versions on request.
Other possible quantizations include: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K
Compatible with **llama.cpp**, but also with:
- **text-generation-webui**
- **KoboldCpp**
- **ParisNeo/GPT4All-UI**
- **llama-cpp-python**
- **ctransformers**
---
# CAVE!
Being a professional myself and having tested the model, I strongly advise that this model is best left in the hands of professionals.
This model can produce very detailed and elaborate responses, but in my opinion it confabulates quite often (a serious issue given the field of use).
Because the responses sound detailed and precise, it is difficult for a layperson to tell when the model is returning facts and when it is returning bullshit.
So unless you are a subject matter expert (biology, medicine, chemistry, pharmacy, etc.), I appeal to your sense of responsibility and ask you:
**use the model only for testing, exploration, and just for fun. In no case should the answers of this model lead to decisions that affect your health.**
---
Here is what the author(s) write in the original model [card](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch/blob/main/README.md):
```
This repo contains the latest version of PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers in the S2ORC dataset.
Notably, different from chaoyi-wu/PMC_LLAMA_7B, this model is further trained for 10 epochs.
The model was trained with the following hyperparameters:
Epochs: 10
Batch size: 128
Cutoff length: 512
Learning rate: 2e-5
Each epoch we sample 512 tokens per paper for training.
```
---
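Since the card lists llama-cpp-python among the compatible runtimes, here is a minimal sketch; the model filename is an assumption, so check the repository's file listing:
```python
from llama_cpp import Llama

# the filename is an assumption; download the Q5_K_M file from this repo first
llm = Llama(model_path="PMC_LLaMA-7B.ggmlv3.q5_K_M.bin")

out = llm("What are the common side effects of metformin?", max_tokens=256)
print(out["choices"][0]["text"])
```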
### That's it!
If you have any further questions, feel free to contact me or start a discussion
| vpelloin/MEDIA_NLU-flaubert_base_cased | vpelloin | 2023-06-19T09:20:30Z | 121 | 0 | transformers | transformers, pytorch, tensorboard, flaubert, token-classification, bert, natural language understanding, NLU, spoken language understanding, SLU, understanding, MEDIA, fr, autotrain_compatible, endpoints_compatible, region:us | token-classification | 2023-05-31T16:19:12Z |
---
language: fr
pipeline_tag: "token-classification"
widget:
- text: "je voudrais réserver une chambre à paris pour demain et lundi"
- text: "d'accord pour l'hôtel à quatre vingt dix euros la nuit"
- text: "deux nuits s'il vous plait"
- text: "dans un hôtel avec piscine à marseille"
tags:
- bert
- flaubert
- natural language understanding
- NLU
- spoken language understanding
- SLU
- understanding
- MEDIA
---
# vpelloin/MEDIA_NLU-flaubert_base_cased
This is a Natural Language Understanding (NLU) model for the French [MEDIA benchmark](https://catalogue.elra.info/en-us/repository/browse/ELRA-S0272/).
It maps each input word to output concept tags (76 available).
This model was trained using [`flaubert/flaubert_base_cased`](https://huggingface.co/flaubert/flaubert_base_cased) as its initial checkpoint. It obtained 13.20% CER (*lower is better*) on the MEDIA test set, using Kaldi ASR transcriptions, as reported in [our Interspeech 2022 publication](http://doi.org/10.21437/Interspeech.2022-352).
## Available MEDIA NLU models:
- [`vpelloin/MEDIA_NLU-flaubert_base_cased`](https://huggingface.co/vpelloin/MEDIA_NLU-flaubert_base_cased): MEDIA NLU model trained using [`flaubert/flaubert_base_cased`](https://huggingface.co/flaubert/flaubert_base_cased). Obtains 13.20% CER on MEDIA test.
- [`vpelloin/MEDIA_NLU-flaubert_base_uncased`](https://huggingface.co/vpelloin/MEDIA_NLU-flaubert_base_uncased): MEDIA NLU model trained using [`flaubert/flaubert_base_uncased`](https://huggingface.co/flaubert/flaubert_base_uncased). Obtains 12.40% CER on MEDIA test.
- [`vpelloin/MEDIA_NLU-flaubert_oral_ft`](https://huggingface.co/vpelloin/MEDIA_NLU-flaubert_oral_ft): MEDIA NLU model trained using [`nherve/flaubert-oral-ft`](https://huggingface.co/nherve/flaubert-oral-ft). Obtains 11.98% CER on MEDIA test.
- [`vpelloin/MEDIA_NLU-flaubert_oral_mixed`](https://huggingface.co/vpelloin/MEDIA_NLU-flaubert_oral_mixed): MEDIA NLU model trained using [`nherve/flaubert-oral-mixed`](https://huggingface.co/nherve/flaubert-oral-mixed). Obtains 12.47% CER on MEDIA test.
- [`vpelloin/MEDIA_NLU-flaubert_oral_asr`](https://huggingface.co/vpelloin/MEDIA_NLU-flaubert_oral_asr): MEDIA NLU model trained using [`nherve/flaubert-oral-asr`](https://huggingface.co/nherve/flaubert-oral-asr). Obtains 12.43% CER on MEDIA test.
- [`vpelloin/MEDIA_NLU-flaubert_oral_asr_nb`](https://huggingface.co/vpelloin/MEDIA_NLU-flaubert_oral_asr_nb): MEDIA NLU model trained using [`nherve/flaubert-oral-asr_nb`](https://huggingface.co/nherve/flaubert-oral-asr_nb). Obtains 12.24% CER on MEDIA test.
## Usage with Pipeline
```python
from transformers import pipeline
generator = pipeline(
model="vpelloin/MEDIA_NLU-flaubert_base_cased",
task="token-classification"
)
sentences = [
"je voudrais réserver une chambre à paris pour demain et lundi",
"d'accord pour l'hôtel à quatre vingt dix euros la nuit",
"deux nuits s'il vous plait",
"dans un hôtel avec piscine à marseille"
]
for sentence in sentences:
print([(tok['word'], tok['entity']) for tok in generator(sentence)])
```
## Usage with AutoTokenizer/AutoModel
```python
from transformers import (
AutoTokenizer,
AutoModelForTokenClassification
)
tokenizer = AutoTokenizer.from_pretrained(
"vpelloin/MEDIA_NLU-flaubert_base_cased"
)
model = AutoModelForTokenClassification.from_pretrained(
"vpelloin/MEDIA_NLU-flaubert_base_cased"
)
sentences = [
"je voudrais réserver une chambre à paris pour demain et lundi",
"d'accord pour l'hôtel à quatre vingt dix euros la nuit",
"deux nuits s'il vous plait",
"dans un hôtel avec piscine à marseille"
]
inputs = tokenizer(sentences, padding=True, return_tensors='pt')
outputs = model(**inputs).logits
print([
[model.config.id2label[i] for i in b]
for b in outputs.argmax(dim=-1).tolist()
])
```
## Reference
If you use this model for your scientific publication, or if you find the resources in this repository useful, please cite the [following paper](http://doi.org/10.21437/Interspeech.2022-352):
```bibtex
@inproceedings{pelloin22_interspeech,
author={Valentin Pelloin and Franck Dary and Nicolas Hervé and Benoit Favre and Nathalie Camelin and Antoine LAURENT and Laurent Besacier},
title={ASR-Generated Text for Language Model Pre-training Applied to Speech Tasks},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={3453--3457},
doi={10.21437/Interspeech.2022-352}
}
```
| NasimB/gpt2_left_out_cbt | NasimB | 2023-06-19T09:02:46Z | 15 | 0 | transformers | transformers, pytorch, gpt2, text-generation, generated_from_trainer, dataset:generator, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 2023-06-19T04:33:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2_left_out_cbt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_left_out_cbt
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.9415 | 0.25 | 500 | 5.0484 |
| 4.7309 | 0.5 | 1000 | 4.6647 |
| 4.4299 | 0.75 | 1500 | 4.4406 |
| 4.2176 | 1.0 | 2000 | 4.3150 |
| 4.0185 | 1.25 | 2500 | 4.2114 |
| 3.9488 | 1.5 | 3000 | 4.1259 |
| 3.8765 | 1.75 | 3500 | 4.0526 |
| 3.8178 | 2.0 | 4000 | 4.0013 |
| 3.6252 | 2.25 | 4500 | 3.9677 |
| 3.6126 | 2.5 | 5000 | 3.9257 |
| 3.5877 | 2.75 | 5500 | 3.8858 |
| 3.5664 | 3.0 | 6000 | 3.8677 |
| 3.3588 | 3.25 | 6500 | 3.8655 |
| 3.3839 | 3.5 | 7000 | 3.8408 |
| 3.375 | 3.75 | 7500 | 3.8131 |
| 3.3702 | 4.0 | 8000 | 3.8096 |
| 3.1311 | 4.25 | 8500 | 3.8272 |
| 3.1718 | 4.5 | 9000 | 3.8071 |
| 3.1835 | 4.75 | 9500 | 3.7894 |
| 3.1802 | 5.01 | 10000 | 3.7920 |
| 2.9241 | 5.26 | 10500 | 3.8233 |
| 2.9597 | 5.51 | 11000 | 3.8156 |
| 2.9773 | 5.76 | 11500 | 3.8019 |
| 2.9708 | 6.01 | 12000 | 3.8077 |
| 2.7159 | 6.26 | 12500 | 3.8440 |
| 2.7495 | 6.51 | 13000 | 3.8459 |
| 2.761 | 6.76 | 13500 | 3.8406 |
| 2.7542 | 7.01 | 14000 | 3.8461 |
| 2.5238 | 7.26 | 14500 | 3.8832 |
| 2.5459 | 7.51 | 15000 | 3.8868 |
| 2.5638 | 7.76 | 15500 | 3.8872 |
| 2.555 | 8.01 | 16000 | 3.8932 |
| 2.388 | 8.26 | 16500 | 3.9161 |
| 2.4017 | 8.51 | 17000 | 3.9215 |
| 2.4056 | 8.76 | 17500 | 3.9236 |
| 2.3998 | 9.01 | 18000 | 3.9254 |
| 2.3199 | 9.26 | 18500 | 3.9339 |
| 2.3241 | 9.51 | 19000 | 3.9359 |
| 2.3205 | 9.76 | 19500 | 3.9359 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
| SHENMU007/neunit_BASE_V10.2 | SHENMU007 | 2023-06-19T08:59:11Z | 79 | 0 | transformers | transformers, pytorch, tensorboard, speecht5, text-to-audio, 1.1.0, generated_from_trainer, zh, dataset:facebook/voxpopuli, license:mit, endpoints_compatible, region:us | text-to-audio | 2023-06-19T06:14:10Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
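The card does not include inference code; here is a minimal SpeechT5 sketch (the Chinese sentence is a placeholder, and the zero speaker embedding is a crude stand-in for a real x-vector):
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V10.2")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V10.2")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好，欢迎使用这个语音合成模型。", return_tensors="pt")

# SpeechT5 expects a 512-dim x-vector speaker embedding; zeros are a placeholder
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```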
| limcheekin/falcon-7b-instruct-ct2 | limcheekin | 2023-06-19T08:53:55Z | 5 | 1 | transformers | transformers, ctranslate2, falcon-7b-instruct, quantization, int8, en, license:apache-2.0, endpoints_compatible, region:us | null | 2023-06-07T02:11:43Z |
---
license: apache-2.0
language:
- en
tags:
- ctranslate2
- falcon-7b-instruct
- quantization
- int8
---
# Falcon-7B-Instruct Q8
This model is a quantized version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct), using int8 quantization.
## Model Details
### Model Description
The model was quantized using [CTranslate2](https://opennmt.net/CTranslate2/) with the following command:
```bash
ct2-transformers-converter --model tiiuae/falcon-7b-instruct --output_dir tiiuae/falcon-7b-instruct-ct2 --copy_files tokenizer.json tokenizer_config.json special_tokens_map.json generation_config.json --quantization int8 --force --low_cpu_mem_usage --trust_remote_code
```
If you want to perform the quantization yourself, you need to install the following dependencies:
```bash
pip install -qU ctranslate2 transformers[torch] accelerate einops
```
- **Shared by:** Lim Chee Kin
- **License:** Apache 2.0
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import ctranslate2
import transformers
generator = ctranslate2.Generator("limcheekin/falcon-7b-instruct-ct2")
tokenizer = transformers.AutoTokenizer.from_pretrained("limcheekin/falcon-7b-instruct-ct2")
prompt = "Long long time ago, "
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([tokens], max_length=256, sampling_topk=10)
text = tokenizer.decode(results[0].sequences_ids[0])
print(text)
```
The code is taken from https://opennmt.net/CTranslate2/guides/transformers.html#mpt.
The key method in the code above is `generate_batch`; you can find [its supported parameters here](https://opennmt.net/CTranslate2/python/ctranslate2.Generator.html#ctranslate2.Generator.generate_batch).
| AtomGradient/mlm_inner_lab | AtomGradient | 2023-06-19T08:42:09Z | 178 | 0 | transformers | transformers, pytorch, tensorboard, roberta, fill-mask, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | fill-mask | 2023-06-19T08:36:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mlm_inner_testl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_inner_testl
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2433 | 1.0 | 1142 | 2.0346 |
| 2.1717 | 2.0 | 2284 | 2.0135 |
| 2.1135 | 3.0 | 3426 | 1.9885 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
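A minimal fill-mask sketch with the Transformers pipeline (the example sentence is a placeholder; distilroberta-derived models use the `<mask>` token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="AtomGradient/mlm_inner_lab")

# placeholder sentence; print the top-3 predictions for <mask>
for pred in fill_mask("The capital of France is <mask>.")[:3]:
    print(pred["token_str"], pred["score"])
```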
| gaioNL/a2c-AntBulletEnv-v0 | gaioNL | 2023-06-19T08:35:28Z | 0 | 0 | stable-baselines3 | stable-baselines3, AntBulletEnv-v0, deep-reinforcement-learning, reinforcement-learning, model-index, region:us | reinforcement-learning | 2023-06-19T08:34:22Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1552.16 +/- 135.46
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
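Until the TODO above is filled in, here is a minimal load-and-evaluate sketch; the checkpoint filename is an assumption, and the environment requires PyBullet:
```python
import gym
import pybullet_envs  # registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# the filename is an assumption; check the repo files
checkpoint = load_from_hub("gaioNL/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```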
| PraveenJesu/openai-whisper-small-zoomrx-colab-2 | PraveenJesu | 2023-06-19T08:07:53Z | 75 | 0 | transformers | transformers, pytorch, whisper, automatic-speech-recognition, audio, hf-asr-leaderboard, en, zh, de, es, ru, ko, fr, ja, pt, tr, pl, ca, nl, ar, sv, it, id, hi, fi, vi, he, uk, el, ms, cs, ro, da, hu, ta, no, th, ur, hr, bg, lt, la, mi, ml, cy, sk, te, fa, lv, bn, sr, az, sl, kn, et, mk, br, eu, is, hy, ne, mn, bs, kk, sq, sw, gl, mr, pa, si, km, sn, yo, so, af, oc, ka, be, tg, sd, gu, am, yi, lo, uz, fo, ht, ps, tk, nn, mt, sa, lb, my, bo, tl, mg, as, tt, haw, ln, ha, ba, jw, su, arxiv:2212.04356, license:apache-2.0, model-index, endpoints_compatible, region:us | automatic-speech-recognition | 2023-06-19T07:39:51Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.432213777886737
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 7.628304527060248
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Test WER
type: wer
value: 87.3
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
This forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Small on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.432213777886737
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible through the Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-small",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| AtomGradient/gpt2_causal_inner_lab | AtomGradient | 2023-06-19T08:06:57Z | 213 | 0 | transformers | transformers, pytorch, tensorboard, gpt2, text-generation, generated_from_trainer, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 2023-06-19T08:01:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2_causal_innertest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_causal_innertest
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7104 | 1.0 | 1133 | 3.7341 |
| 3.6507 | 2.0 | 2266 | 3.7278 |
| 3.6263 | 3.0 | 3399 | 3.7260 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
| Kongfha/KlonSuphap-LM | Kongfha | 2023-06-19T07:52:31Z | 158 | 2 | transformers | transformers, pytorch, safetensors, gpt2, text-generation, th, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 2023-06-06T05:47:56Z |
---
license: mit
language:
- th
pipeline_tag: text-generation
widget:
- text: ชมวิหคนกไม้ในวิ<s2>ถี</s2>
- text: มัจฉาใน
- text: มิตรแท้
- text: แม้นชีวี
---
# 🌾 KlonSuphap-LM (แต่งกลอนแปด ด้วย GPT-2)
Visit Demo Space -> [Kongfha/KlonSuphap-Generator](https://huggingface.co/spaces/Kongfha/KlonSuphap-Generator) <br>
Visit GitHub Repository -> [Kongfha/KlonSuphap-LM](https://github.com/Kongfha/KlonSuphap-LM/) <br>
Visit Blog (Thai Language) -> [🌾 KlonSuphap-LM แต่งกลอนแปด ด้วย GPT-2](https://medium.com/@kampanatyingseree4704/klonsuphap-lm-%E0%B9%81%E0%B8%95%E0%B9%88%E0%B8%87%E0%B8%81%E0%B8%A5%E0%B8%AD%E0%B8%99%E0%B9%81%E0%B8%9B%E0%B8%94-%E0%B8%94%E0%B9%89%E0%B8%A7%E0%B8%A2-gpt-2-d2baffc80907)
**KlonSuphap-LM** or GPT-2 for Thai poems (Klon-Paed Poem).
I use [GPT-2 base Thai](https://huggingface.co/flax-community/gpt2-base-thai) as a pre-trained model for fine-tuning exclusively
on Thai Klon-Paed Poem (กลอนแปด) retrieved from [Thai Literature Corpora (TLC)](https://attapol.github.io/tlc.html?fbclid=IwAR1UGV8hKGphwcuRCOCjJkVE4nC9yQ1_M_lFnxx9CLl9IzVKGK_mtbotQzU) dataset.
In my earlier poem-generation model, [PhraAphaiManee-LM](https://huggingface.co/Kongfha/PhraAphaiManee-LM/), the model could produce text resembling Thai Klon-Paed poems, but it still did not adhere to the rules of Thai Klon-Paed (ฉันทลักษณ์) in its generated output. To overcome this challenge, I developed the following techniques to make the model adhere more closely to the rules.
1. **Fine-Tuning dataset preprocessing.<br>**
Since I had only a limited quantity of Thai Klon-Paed poems, about 65,770 lines (บาท), I developed a technique called ***"Rhyme Tagging"*** to make the model adhere more closely to the rules. <br>
***"Rhyme Tagging"*** inserts tags before and after words that are expected to rhyme with other words according to Klon-Paed rules. <br>
<u>**Example**</u><br>
>  พอได้ยินเสียงระฆังข้างหลัง\<s2>เขา\</s2><br>เห็นผู้\<es2>เฒ่า\</es2>ออกจากชะวาก\<s2>ผา\</s2><br>สรรพางค์ร่างกายแก่ช\<es2>รา\</es2><br>แต่ผิว\<es2>หน้า\</es2>นั้นละม้ายคล้ายทา\<s3>รก\</s3>  
***"Rhyme Tagging"*** mitigates the potential loss of rhyme information amid the much larger volume of non-rhyme-related text: it helps the self-attention mechanism extract more rhyme-related information and keep it relevant throughout processing.
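A minimal sketch of what this preprocessing could look like (the `find_rhyme_spans` helper, which would apply the Klon-Paed rhyme rules, is hypothetical; the actual preprocessing lives in the GitHub repository):
```py
# Hypothetical sketch of "Rhyme Tagging". find_rhyme_spans is an assumed
# helper returning (start, end, tag) spans for the rhyming syllables of a
# line (tags such as s2, es2, s3) according to Klon-Paed rules.
def insert_rhyme_tags(poem_lines, find_rhyme_spans):
    tagged = []
    for line in poem_lines:
        # insert from the end of the line so earlier offsets stay valid
        for start, end, tag in sorted(find_rhyme_spans(line), reverse=True):
            line = f"{line[:start]}<{tag}>{line[start:end]}</{tag}>{line[end:]}"
        tagged.append(line)
    return tagged
```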
2. **Applying Attention-Mask while fine-tuning.<br>**
  In addition to a standard fine-tuning pass over the preprocessed dataset, I also fine-tuned the model with an attention mask applied to the non-rhyme-related words, as in the following visualization.<br>
<u>**Visualized Example**</u><br>
>  ------------------------------\<s2>เขา\</s2><br>-----\<es2>เฒ่า\</es2>--------------------\<s2>ผา\</s2><br>---------------------------\<es2>รา\</es2><br>------\<es2>หน้า\</es2>-----------------------\<s3>รก\</s3>  
By applying the attention mask while fine-tuning, the model can prioritize extracting information from the rhyme tags and their surrounding words without dropping positional information.
This improves the model's performance in later stages of fine-tuning, as if the model were constructing a lookup table for rhyme-related words.
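A rough sketch of the idea (this is my reading of the description above, not the exact training code; it assumes the rhyme tags were added to the tokenizer's vocabulary):
```py
# Sketch: attend only to rhyme tags and a small window around them.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt2-base-thai")
rhyme_tags = ["<s2>", "</s2>", "<es2>", "</es2>", "<s3>", "</s3>"]
tokenizer.add_tokens(rhyme_tags)

def rhyme_attention_mask(text, window=1):
    enc = tokenizer(text, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    mask = torch.zeros(len(tokens), dtype=torch.long)
    for i, tok in enumerate(tokens):
        if tok in rhyme_tags:  # keep the tag and its neighbours visible
            mask[max(0, i - window):i + window + 1] = 1
    enc["attention_mask"] = mask.unsqueeze(0)
    return enc
```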
3. **Performing Reinforcement Learning<br>**
  After the supervised fine-tuning stage, I apply reinforcement learning to the model using [voidful/TextRL](https://github.com/voidful/TextRL), defining a ***Klon-Paed Grader*** as a PPO environment.<br>
  For each episode I randomly pick the initial 2-5 syllables of a line from the validation set as a text input in an observation list, then force the model to generate only one line (บาท), which contains only one rhyme pair.<br>
  TextRL repeatedly feeds text inputs from the observation list to the model, calculates the reward using my ***Klon-Paed Grader***, and updates the model's weights based on the rewards it received.
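A condensed sketch of that setup, following the pattern in the TextRL README (`klon_paed_grade` stands in for my actual grader, and the environment arguments are illustrative):
```py
# Sketch of the PPO setup, after the voidful/TextRL README pattern.
# klon_paed_grade(...) is a stand-in for the Klon-Paed Grader, which scores
# a generated line for correct rhyme placement; model/tokenizer are the
# fine-tuned GPT-2, and observation_list holds the 2-5 syllable seeds.
from textrl import TextRLEnv, TextRLActor

class KlonPaedEnv(TextRLEnv):
    def get_reward(self, input_item, predicted_list, finish):
        if finish:  # grade only once the whole line has been generated
            line = self.tokenizer.convert_tokens_to_string(predicted_list[0])
            return [klon_paed_grade(input_item, line)]
        return [0]

env = KlonPaedEnv(model, tokenizer, observation_input=observation_list)
actor = TextRLActor(env, model, tokenizer)
agent = actor.agent_ppo(update_interval=10, minibatch_size=256, epochs=20)
```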
## Cherry-Picked Examples From Demo (Top-P 0.8 Temp 0.8)
>  ปัญญาประดิษฐ์องค์ทรงสุรดี<br>เห็นสุดมีบังคมก้มเกศา<br>ต่างยิ้มละลูกยับลงตรงบันลา<br>ถึงว่ารุ่งรางสว่างกลางนวัง  
>  ขอขอบคุณบุญกุศลจิต<br>เป็นเพื่อนคิดจะเป็นคู่เคหา<br>ต่างคนกับเหล่านางสร้อยตา<br>ต้องมาก็จะมาไปว่าไร  
>  ทรานส์ฟอร์เมอร์มีเซลฟ์แอตเทนชัน<br>ขึ้นบรรลักษณ์ก็เหลือบเขียนฉงน<br>ที่จับต้อนแต่เรือนเพื่อนเหมือนอย่างวน<br>จะต้องชวนมาช่วยให้เชยชม  
## Example use
``` py
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Kongfha/KlonSuphap-LM"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

generate = pipeline("text-generation",
                    model=model,
                    tokenizer=tokenizer)

input_sentence = "มิตรแท้"
generated_text = generate(input_sentence,
                          max_length=160,
                          top_p=0.85,
                          temperature=1)  # generation parameters can be varied

print(f"Input: {input_sentence}")
print(f"Output:\n {generated_text[0]['generated_text']}")
```
|
silpakanneganti/bert-medical-ner
|
silpakanneganti
| 2023-06-19T07:39:55Z | 19 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-14T14:16:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-medical-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-medical-ner
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1905
- Precision: 0.6552
- Recall: 0.6965
- F1: 0.6752
- Accuracy: 0.7449
## Model description
More information needed
## Intended uses & limitations
More information needed
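A minimal usage sketch (the entity labels come from the fine-tuned head's `id2label` config; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="silpakanneganti/bert-medical-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Patient was prescribed 40 mg atorvastatin daily for hyperlipidemia."))
```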
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 71 | 1.8255 | 0.3427 | 0.4460 | 0.3876 | 0.5555 |
| No log | 2.0 | 142 | 1.3139 | 0.4722 | 0.5703 | 0.5166 | 0.6442 |
| No log | 3.0 | 213 | 1.1147 | 0.5258 | 0.6029 | 0.5617 | 0.6886 |
| No log | 4.0 | 284 | 0.9873 | 0.5785 | 0.6151 | 0.5962 | 0.7048 |
| No log | 5.0 | 355 | 0.9282 | 0.6314 | 0.6558 | 0.6434 | 0.7312 |
| No log | 6.0 | 426 | 0.8760 | 0.642 | 0.6538 | 0.6478 | 0.7329 |
| No log | 7.0 | 497 | 0.8501 | 0.6608 | 0.6904 | 0.6753 | 0.7466 |
| 1.1706 | 8.0 | 568 | 0.8313 | 0.6791 | 0.7067 | 0.6926 | 0.7483 |
| 1.1706 | 9.0 | 639 | 0.8002 | 0.6616 | 0.7047 | 0.6824 | 0.7449 |
| 1.1706 | 10.0 | 710 | 0.8280 | 0.6640 | 0.6721 | 0.6680 | 0.7363 |
| 1.1706 | 11.0 | 781 | 0.8248 | 0.6594 | 0.6823 | 0.6707 | 0.7457 |
| 1.1706 | 12.0 | 852 | 0.7988 | 0.6610 | 0.7189 | 0.6888 | 0.7654 |
| 1.1706 | 13.0 | 923 | 0.8593 | 0.6587 | 0.6762 | 0.6673 | 0.7423 |
| 1.1706 | 14.0 | 994 | 0.8204 | 0.6719 | 0.6965 | 0.6840 | 0.7534 |
| 0.4317 | 15.0 | 1065 | 0.8478 | 0.6770 | 0.7128 | 0.6944 | 0.7526 |
| 0.4317 | 16.0 | 1136 | 0.8855 | 0.6610 | 0.7149 | 0.6869 | 0.7730 |
| 0.4317 | 17.0 | 1207 | 0.9091 | 0.6751 | 0.7067 | 0.6905 | 0.7560 |
| 0.4317 | 18.0 | 1278 | 0.9201 | 0.6555 | 0.7169 | 0.6848 | 0.7568 |
| 0.4317 | 19.0 | 1349 | 0.9840 | 0.6623 | 0.7189 | 0.6895 | 0.7483 |
| 0.4317 | 20.0 | 1420 | 0.9817 | 0.6833 | 0.7251 | 0.7036 | 0.7543 |
| 0.4317 | 21.0 | 1491 | 0.9958 | 0.6583 | 0.6945 | 0.6759 | 0.7509 |
| 0.2121 | 22.0 | 1562 | 0.9340 | 0.6647 | 0.7026 | 0.6832 | 0.7722 |
| 0.2121 | 23.0 | 1633 | 0.9906 | 0.6622 | 0.7108 | 0.6857 | 0.7619 |
| 0.2121 | 24.0 | 1704 | 1.0099 | 0.6692 | 0.7088 | 0.6884 | 0.7526 |
| 0.2121 | 25.0 | 1775 | 1.0627 | 0.6673 | 0.7189 | 0.6922 | 0.7662 |
| 0.2121 | 26.0 | 1846 | 1.0744 | 0.6584 | 0.7067 | 0.6817 | 0.7637 |
| 0.2121 | 27.0 | 1917 | 1.1328 | 0.6569 | 0.6864 | 0.6713 | 0.7389 |
| 0.2121 | 28.0 | 1988 | 1.0799 | 0.6641 | 0.7128 | 0.6876 | 0.7577 |
| 0.1201 | 29.0 | 2059 | 1.1156 | 0.6628 | 0.7047 | 0.6831 | 0.7568 |
| 0.1201 | 30.0 | 2130 | 1.0839 | 0.6628 | 0.6965 | 0.6792 | 0.75 |
| 0.1201 | 31.0 | 2201 | 1.1511 | 0.6526 | 0.6925 | 0.6719 | 0.7389 |
| 0.1201 | 32.0 | 2272 | 1.1140 | 0.6737 | 0.7149 | 0.6937 | 0.7543 |
| 0.1201 | 33.0 | 2343 | 1.1094 | 0.6609 | 0.6986 | 0.6792 | 0.7466 |
| 0.1201 | 34.0 | 2414 | 1.1332 | 0.6755 | 0.7251 | 0.6994 | 0.7534 |
| 0.1201 | 35.0 | 2485 | 1.1322 | 0.6841 | 0.7189 | 0.7011 | 0.7551 |
| 0.0776 | 36.0 | 2556 | 1.1603 | 0.6711 | 0.7189 | 0.6942 | 0.7551 |
| 0.0776 | 37.0 | 2627 | 1.1460 | 0.6504 | 0.7047 | 0.6764 | 0.7543 |
| 0.0776 | 38.0 | 2698 | 1.1387 | 0.6584 | 0.7067 | 0.6817 | 0.7577 |
| 0.0776 | 39.0 | 2769 | 1.1438 | 0.6641 | 0.7088 | 0.6857 | 0.7534 |
| 0.0776 | 40.0 | 2840 | 1.1791 | 0.6660 | 0.7149 | 0.6896 | 0.7577 |
| 0.0776 | 41.0 | 2911 | 1.1701 | 0.6641 | 0.7088 | 0.6857 | 0.75 |
| 0.0776 | 42.0 | 2982 | 1.1889 | 0.6615 | 0.6965 | 0.6786 | 0.7457 |
| 0.0571 | 43.0 | 3053 | 1.1810 | 0.6533 | 0.6945 | 0.6732 | 0.7449 |
| 0.0571 | 44.0 | 3124 | 1.1944 | 0.6577 | 0.6965 | 0.6766 | 0.7440 |
| 0.0571 | 45.0 | 3195 | 1.2032 | 0.6564 | 0.6925 | 0.6739 | 0.7432 |
| 0.0571 | 46.0 | 3266 | 1.2092 | 0.6609 | 0.6945 | 0.6773 | 0.7449 |
| 0.0571 | 47.0 | 3337 | 1.1864 | 0.6622 | 0.6986 | 0.6799 | 0.7466 |
| 0.0571 | 48.0 | 3408 | 1.1972 | 0.6538 | 0.6925 | 0.6726 | 0.7449 |
| 0.0571 | 49.0 | 3479 | 1.1899 | 0.6545 | 0.6945 | 0.6739 | 0.7449 |
| 0.0467 | 50.0 | 3550 | 1.1905 | 0.6552 | 0.6965 | 0.6752 | 0.7449 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AtomGradient/qa_model_inner_exper
|
AtomGradient
| 2023-06-19T07:37:35Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-19T07:34:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: qa_model_inner_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa_model_inner_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7665
## Model description
More information needed
## Intended uses & limitations
More information needed
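A minimal usage sketch (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="AtomGradient/qa_model_inner_exper")
print(qa(
    question="What was the model fine-tuned on?",
    context="This distilbert-base-uncased checkpoint was fine-tuned on the SQuAD dataset.",
))
```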
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4554 |
| 2.7335 | 2.0 | 500 | 1.8712 |
| 2.7335 | 3.0 | 750 | 1.7665 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AtomGradient/text_classification_inner_lab
|
AtomGradient
| 2023-06-19T07:21:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T07:07:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93208
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2302
- Accuracy: 0.9321
## Model description
More information needed
## Intended uses & limitations
More information needed
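A minimal usage sketch (the example review is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AtomGradient/text_classification_inner_lab")
print(classifier("This movie was an absolute delight from start to finish."))
```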
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2258 | 1.0 | 1563 | 0.2223 | 0.9202 |
| 0.1543 | 2.0 | 3126 | 0.2302 | 0.9321 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Newbeder/Reinforce-Cart-pole
|
Newbeder
| 2023-06-19T07:04:53Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T07:04:39Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cart-pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JaeHwi/my_awesome_eli5_clm-model
|
JaeHwi
| 2023-06-19T07:01:46Z | 215 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-01T06:24:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8792 | 1.0 | 1131 | 3.7492 |
| 3.7711 | 2.0 | 2262 | 3.7329 |
| 3.7299 | 3.0 | 3393 | 3.7304 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
EarthnDusk/Poltergeist-Illustration
|
EarthnDusk
| 2023-06-19T06:47:45Z | 28 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable diffusion",
"anime",
"comic book",
"mix",
"merge",
"text-to-image",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:Nerfgun3/bad_prompt",
"dataset:Duskfallcrew/remydataset",
"dataset:basesssssp/bad-kemono-negative-embedding",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-19T00:44:23Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
tags:
- stable diffusion
- diffusers
- anime
- comic book
- mix
- merge
datasets:
- fka/awesome-chatgpt-prompts
- Nerfgun3/bad_prompt
- Duskfallcrew/remydataset
- basesssssp/bad-kemono-negative-embedding
pipeline_tag: text-to-image
---
## POODA-BEEP!
This is censored language for POLTERBITCH aka Poltergeist.
It's an in house nod to some of our alter's truths, and it's kind of a joke for Beetlejuice fans.
THIS IS AN ILLUSTRATION - COMIC MIX, and there are several versions of this. You'll note that there is only ONE DEMO SPACE FOR IT ON HF so far - but give us time, we're working on it!
---
## We are partly sponsored by Pirate diffusion, and are waiting on links and images.
REQUESTS FOR LORAS AND MERGES: https://forms.gle/tAgb8RsC8mf1scV48
---
## HOW TO SUPPORT US:
Join our Reddit: https://www.reddit.com/r/earthndusk/
If you got requests, or concerns, We're still looking for beta testers: JOIN THE DISCORD AND DEMAND THINGS OF US: https://discord.gg/5t2kYxt7An
Listen to the music that we've made that goes with our art: https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38
We stream a lot of our testing on twitch: https://www.twitch.tv/duskfallcrew
any chance you can spare a coffee or three? https://ko-fi.com/DUSKFALLcrew
If SOMEHOW CIVIT ISN'T WORKING WE WILL ALWAYS HAVE A BACKUP: https://huggingface.co/Duskfallcrew/
Submit SFW and amazing wallpapers at PaprGG: https://discord.gg/2UXkGwndVE
---
## STOP
We used AFTER DETAILER AND HI RES FIX.
Upscaler Choices:
https://huggingface.co/uwg/upscaler/tree/main/ESRGAN
Vae Alternatives:
https://huggingface.co/datasets/VASVASVAS/vae
---
## MIX BREAK DOWN
This is literally as far as I can tell just poodabeep and epic v4 with lora binding.
Because it's a comic book style model, it likely has MANY of Lykon's comic loras in it.
Marvels and Dungeons
Largely lost the OG list for it...
https://civitai.com/models/71404/mcbs-machinecodes-comic-book-style
plus more iComix
and Duel Comic Strike
---
## LEGAL RESPONSIBILITY DOWNSTREAM
You are legally responsible for YOUR use of this model and its downstream uses. We highly request you don't do anything illegal, morally incorrect or anything that goes against the Creative Open-Rail M details.
You are FREE to add this to GENERATION sites as long as you link back to the civit AI page here: https://civitai.com/models/27096/epic-mix-v4
You are WITHIN reason allowed to make commercial use images, but as always check your local laws, and of course - do not use this to create works that claim they are NOT AI-generated media. If you're working on a larger project please consider supporting us financially.
|
giantsol/hansol_10_512_lora
|
giantsol
| 2023-06-19T06:28:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-19T06:21:34Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: lvl person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - giantsol/hansol_10_512_lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt `lvl person` using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
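A minimal usage sketch with 🤗 Diffusers (assuming a diffusers version new enough to provide `load_lora_weights`; the prompt and output filename are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("giantsol/hansol_10_512_lora")  # UNet + text-encoder LoRA
image = pipe("a photo of lvl person", num_inference_steps=30).images[0]
image.save("lvl_person.png")
```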
|
songyulong/LinartV2.0
|
songyulong
| 2023-06-19T06:26:53Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-19T06:25:29Z |
---
license: bigscience-openrail-m
---
|
gammatau/santacoder-ts-fim
|
gammatau
| 2023-06-19T06:16:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-05T23:42:20Z |
# SantaCoder Trained For Fill-In-The-Type
More information here: https://github.com/gammaTauAI/santacoder-finetuning
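Since the card is otherwise empty, here is a hedged fill-in-the-middle sketch. The FIM token names (`<fim-prefix>`, `<fim-suffix>`, `<fim-middle>`) follow the base SantaCoder card; verify them against this checkpoint's tokenizer. The `custom_code` tag on this repo means `trust_remote_code=True` is needed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "gammatau/santacoder-ts-fim"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Ask the model to fill in the TypeScript type annotation (the "middle").
prompt = "<fim-prefix>function add(a: <fim-suffix>, b: number): number {\n  return a + b;\n}<fim-middle>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))
```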
|
Basit34/cv_parser
|
Basit34
| 2023-06-19T06:14:44Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-19T05:28:21Z |
---
title: Cv Parser
emoji: 💩
colorFrom: green
colorTo: red
sdk: gradio
app_file: app.py
pinned: false
license: mit
---
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio`, `streamlit`, or `static`
`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
Path is relative to the root of the repository.
`models`: _List[string]_
HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
Will be parsed automatically from your code if not specified here.
`datasets`: _List[string]_
HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
Will be parsed automatically from your code if not specified here.
`pinned`: _boolean_
Whether the Space stays on top of your list.
|
gurugaurav/lilt-en-funsd
|
gurugaurav
| 2023-06-19T06:08:05Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-19T05:26:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5496
- Answer: {'precision': 0.875, 'recall': 0.9253365973072215, 'f1': 0.8994646044021416, 'number': 817}
- Header: {'precision': 0.6276595744680851, 'recall': 0.4957983193277311, 'f1': 0.5539906103286385, 'number': 119}
- Question: {'precision': 0.9049360146252285, 'recall': 0.9192200557103064, 'f1': 0.9120221096269001, 'number': 1077}
- Overall Precision: 0.8796
- Overall Recall: 0.8967
- Overall F1: 0.8881
- Overall Accuracy: 0.8134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.434 | 10.53 | 200 | 1.0227 | {'precision': 0.8357705286839145, 'recall': 0.9094247246022031, 'f1': 0.8710433763188746, 'number': 817} | {'precision': 0.7058823529411765, 'recall': 0.40336134453781514, 'f1': 0.5133689839572192, 'number': 119} | {'precision': 0.8683522231909329, 'recall': 0.924791086350975, 'f1': 0.89568345323741, 'number': 1077} | 0.8493 | 0.8877 | 0.8681 | 0.7935 |
| 0.0484 | 21.05 | 400 | 1.3626 | {'precision': 0.8098360655737705, 'recall': 0.9069767441860465, 'f1': 0.8556581986143187, 'number': 817} | {'precision': 0.6086956521739131, 'recall': 0.47058823529411764, 'f1': 0.5308056872037914, 'number': 119} | {'precision': 0.8613333333333333, 'recall': 0.8997214484679665, 'f1': 0.8801089918256131, 'number': 1077} | 0.8283 | 0.8773 | 0.8521 | 0.7995 |
| 0.0168 | 31.58 | 600 | 1.3003 | {'precision': 0.8440046565774156, 'recall': 0.8873929008567931, 'f1': 0.8651551312649164, 'number': 817} | {'precision': 0.6421052631578947, 'recall': 0.5126050420168067, 'f1': 0.5700934579439252, 'number': 119} | {'precision': 0.8776595744680851, 'recall': 0.9192200557103064, 'f1': 0.8979591836734694, 'number': 1077} | 0.8530 | 0.8823 | 0.8674 | 0.8189 |
| 0.008 | 42.11 | 800 | 1.3225 | {'precision': 0.8584795321637427, 'recall': 0.8984088127294981, 'f1': 0.8779904306220095, 'number': 817} | {'precision': 0.5736434108527132, 'recall': 0.6218487394957983, 'f1': 0.596774193548387, 'number': 119} | {'precision': 0.888468809073724, 'recall': 0.872794800371402, 'f1': 0.8805620608899298, 'number': 1077} | 0.8560 | 0.8684 | 0.8621 | 0.8210 |
| 0.0059 | 52.63 | 1000 | 1.6362 | {'precision': 0.8307522123893806, 'recall': 0.9192166462668299, 'f1': 0.8727484020918072, 'number': 817} | {'precision': 0.6419753086419753, 'recall': 0.4369747899159664, 'f1': 0.52, 'number': 119} | {'precision': 0.8944444444444445, 'recall': 0.8969359331476323, 'f1': 0.8956884561891516, 'number': 1077} | 0.8567 | 0.8788 | 0.8676 | 0.8061 |
| 0.0027 | 63.16 | 1200 | 1.6927 | {'precision': 0.8269858541893362, 'recall': 0.9302325581395349, 'f1': 0.8755760368663594, 'number': 817} | {'precision': 0.6046511627906976, 'recall': 0.4369747899159664, 'f1': 0.5073170731707317, 'number': 119} | {'precision': 0.9000925069380203, 'recall': 0.903435468895079, 'f1': 0.901760889712697, 'number': 1077} | 0.8557 | 0.8867 | 0.8709 | 0.7939 |
| 0.002 | 73.68 | 1400 | 1.4609 | {'precision': 0.8479467258601554, 'recall': 0.9351285189718482, 'f1': 0.889406286379511, 'number': 817} | {'precision': 0.5726495726495726, 'recall': 0.5630252100840336, 'f1': 0.5677966101694915, 'number': 119} | {'precision': 0.8917431192660551, 'recall': 0.9025069637883009, 'f1': 0.8970927549607752, 'number': 1077} | 0.8553 | 0.8957 | 0.8750 | 0.7965 |
| 0.0012 | 84.21 | 1600 | 1.4851 | {'precision': 0.865909090909091, 'recall': 0.9326805385556916, 'f1': 0.8980553918680023, 'number': 817} | {'precision': 0.6074766355140186, 'recall': 0.5462184873949579, 'f1': 0.575221238938053, 'number': 119} | {'precision': 0.9008341056533827, 'recall': 0.9025069637883009, 'f1': 0.901669758812616, 'number': 1077} | 0.8708 | 0.8937 | 0.8821 | 0.8131 |
| 0.0006 | 94.74 | 1800 | 1.5228 | {'precision': 0.850613154960981, 'recall': 0.9339045287637698, 'f1': 0.8903150525087514, 'number': 817} | {'precision': 0.594059405940594, 'recall': 0.5042016806722689, 'f1': 0.5454545454545453, 'number': 119} | {'precision': 0.896709323583181, 'recall': 0.9108635097493036, 'f1': 0.9037309995393827, 'number': 1077} | 0.8623 | 0.8962 | 0.8789 | 0.8082 |
| 0.0004 | 105.26 | 2000 | 1.5287 | {'precision': 0.867579908675799, 'recall': 0.9302325581395349, 'f1': 0.8978145304193739, 'number': 817} | {'precision': 0.6222222222222222, 'recall': 0.47058823529411764, 'f1': 0.5358851674641149, 'number': 119} | {'precision': 0.8917710196779964, 'recall': 0.9257195914577531, 'f1': 0.9084282460136676, 'number': 1077} | 0.8700 | 0.9006 | 0.8850 | 0.8128 |
| 0.0003 | 115.79 | 2200 | 1.5306 | {'precision': 0.8766006984866124, 'recall': 0.9216646266829865, 'f1': 0.8985680190930787, 'number': 817} | {'precision': 0.6263736263736264, 'recall': 0.4789915966386555, 'f1': 0.5428571428571428, 'number': 119} | {'precision': 0.8902765388046388, 'recall': 0.9266480965645311, 'f1': 0.908098271155596, 'number': 1077} | 0.8730 | 0.8982 | 0.8854 | 0.8127 |
| 0.0001 | 126.32 | 2400 | 1.5496 | {'precision': 0.875, 'recall': 0.9253365973072215, 'f1': 0.8994646044021416, 'number': 817} | {'precision': 0.6276595744680851, 'recall': 0.4957983193277311, 'f1': 0.5539906103286385, 'number': 119} | {'precision': 0.9049360146252285, 'recall': 0.9192200557103064, 'f1': 0.9120221096269001, 'number': 1077} | 0.8796 | 0.8967 | 0.8881 | 0.8134 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
2022happy/swin-tiny-patch4-window7-224-pruned-0.3-finetuned-eurosat
|
2022happy
| 2023-06-19T04:39:06Z | 244 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:cifar10",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-19T04:03:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-pruned-0.3-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-pruned-0.3-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5344
- Accuracy: 0.815
## Model description
More information needed
## Intended uses & limitations
More information needed
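A minimal usage sketch (the image path is illustrative; the image processor handles resizing CIFAR-10-sized inputs to 224×224):
```python
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="2022happy/swin-tiny-patch4-window7-224-pruned-0.3-finetuned-eurosat",
)
print(classifier(Image.open("example.png")))  # scores over the CIFAR-10 classes
```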
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3059 | 1.0 | 351 | 0.9793 | 0.6582 |
| 0.9568 | 2.0 | 703 | 0.6321 | 0.7836 |
| 0.8607 | 2.99 | 1053 | 0.5344 | 0.815 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
musralina/helsinki-opus-de-en-fine-tuned-wmt16-finetuned-src-to-trg
|
musralina
| 2023-06-19T04:19:54Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-11T11:24:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: helsinki-opus-de-en-fine-tuned-wmt16-finetuned-src-to-trg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# helsinki-opus-de-en-fine-tuned-wmt16-finetuned-src-to-trg
This model is a fine-tuned version of [mariav/helsinki-opus-de-en-fine-tuned-wmt16](https://huggingface.co/mariav/helsinki-opus-de-en-fine-tuned-wmt16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8597
- Rouge1: 64.539
- Rouge2: 32.7634
- Rougel: 61.3523
- Rougelsum: 61.3758
- Gen Len: 23.9561
- Bleu-1: 64.1391
- Bleu-2: 45.1093
- Bleu-3: 32.4697
- Bleu-4: 24.2684
- Meteor: 0.5436
## Model description
This model is a fine-tuned version of mariav/helsinki-opus-de-en-fine-tuned-wmt16 on the Phoenix Weather dataset (PHOENIX-2014-T).
## Intended uses & limitations
The purpose is Neural Machine Translation from German text into German Sign Glosses, which could be used for avatar generation within the Sign Language Production task.
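A minimal usage sketch (the input sentence is illustrative; the output is a sequence of German sign glosses rather than an English translation, despite the base model's name):
```python
from transformers import pipeline

glosser = pipeline(
    "translation",
    model="musralina/helsinki-opus-de-en-fine-tuned-wmt16-finetuned-src-to-trg",
)
print(glosser("morgen regnet es in ganz deutschland")[0]["translation_text"])
```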
## Training and evaluation data
Phoenix Weather dataset (PHOENIX-2014-T)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 7575
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|
| 1.1513 | 1.0 | 1189 | 0.9604 | 61.8236 | 30.0156 | 58.9651 | 58.9484 | 22.8563 | 58.6480 | 40.5508 | 29.0090 | 21.2884 | 0.4961 |
| 0.9067 | 2.0 | 2378 | 0.8825 | 62.8824 | 30.8604 | 59.9543 | 59.9884 | 22.7564 | 60.5598 | 42.0443 | 29.9532 | 21.8711 | 0.5138 |
| 0.739 | 3.0 | 3567 | 0.8547 | 63.8251 | 31.6294 | 60.7141 | 60.7508 | 24.5219 | 62.6847 | 43.6395 | 31.1174 | 22.8704 | 0.5318 |
| 0.636 | 4.0 | 4756 | 0.8554 | 64.5308 | 32.6897 | 61.347 | 61.3929 | 22.7912 | 63.0309 | 44.4786 | 32.0956 | 23.8647 | 0.5369 |
| 0.5745 | 5.0 | 5945 | 0.8597 | 64.539 | 32.7634 | 61.3523 | 61.3758 | 23.9561 | 64.1391 | 45.1093 | 32.4697 | 24.2684 | 0.5436 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nolanaatama/tkfrmtkyghlddrvc200pchsxht
|
nolanaatama
| 2023-06-19T04:11:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T03:59:38Z |
---
license: creativeml-openrail-m
---
|
timjwhite/Pyramids
|
timjwhite
| 2023-06-19T03:46:20Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-19T03:46:11Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: timjwhite/Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
twidfeel/xlm-roberta-base-finetuned-panx-de
|
twidfeel
| 2023-06-19T03:23:34Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-19T03:13:47Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8608314849570609
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1378
- F1: 0.8608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.258 | 1.0 | 525 | 0.1619 | 0.8168 |
| 0.1295 | 2.0 | 1050 | 0.1357 | 0.8468 |
| 0.0827 | 3.0 | 1575 | 0.1378 | 0.8608 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AhsanZaidi/Reinforce-PixelCopter
|
AhsanZaidi
| 2023-06-19T03:07:40Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T03:07:30Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 60.10 +/- 45.96
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
TheFools/Celinne
|
TheFools
| 2023-06-19T02:50:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T02:48:28Z |
---
license: creativeml-openrail-m
---
|
timjwhite/ppo-SnowballTarget
|
timjwhite
| 2023-06-19T02:42:15Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-18T06:19:56Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: timjwhite/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
openaccess-ai-collective/minotaur-15b
|
openaccess-ai-collective
| 2023-06-19T02:41:57Z | 9 | 15 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"code",
"dataset:bigcode/the-stack-dedup",
"dataset:tiiuae/falcon-refinedweb",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:metaeval/ScienceQA_text_only",
"dataset:hellaswag",
"dataset:openai/summarize_from_feedback",
"dataset:riddle_sense",
"dataset:gsm8k",
"dataset:camel-ai/math",
"dataset:camel-ai/biology",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:winglian/evals",
"arxiv:1911.02150",
"arxiv:2205.14135",
"arxiv:2207.14255",
"arxiv:2305.06161",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T17:19:19Z |
---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
- text: 'Gradient descent is'
example_title: Machine Learning
group: English
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- tiiuae/falcon-refinedweb
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
metrics:
- code_eval
- mmlu
- arc
- hellaswag
- truthfulqa
library_name: transformers
tags:
- code
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Minotaur 15B 8K
Minotaur 15B is an instruct fine-tuned model on top of Starcoder Plus. Minotaur 15B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.
Minotaur 15B has a context length of 8K tokens, allowing for strong recall at long contexts.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])
# Prompts
Chat-style prompts only, using `USER:` / `ASSISTANT:` turns.
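For example (the question is illustrative):
```
USER: Why is the sky blue?
ASSISTANT:
```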
<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="600"/>
# Training Datasets
The Minotaur 15B model is fine-tuned on the following openly available datasets:
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows, instruct augmented for detailed explanations, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
- prose generation
# Shoutouts
Special thanks to Nanobit for helping with Axolotl and to TheBloke for quantizing these models, making them more accessible to all.
# Demo
HF Demo in Spaces available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.
## Release Notes
- https://wandb.ai/wing-lian/minotaur-16b-8k/runs/tshgbl2k
## Build
Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 4XA100 80GB
- 1 epoch taking approximately 30 hours
- Trained using QLoRA techniques
## Bias, Risks, and Limitations
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model StarCoder, please refer to its model card's Limitations Section for relevant information. (included below)
## Benchmarks
TBD
## Examples
TBD
# StarCoderPlus
Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
## Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) on 600B tokens from the English web dataset [RedefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** English & 80+ Programming languages
## Use
### Intended use
The model was trained on English and GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in [StarChat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) makes a capable assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoderplus"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Attribution & Other Requirements
The training code dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and can carry the stereotypes and biases commonly encountered online.
Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
# Training
StarCoderPlus is a version of StarCoderBase fine-tuned on 600B English and code tokens; the base model was pre-trained on 1T code tokens. Below are the fine-tuning details:
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Finetuning steps:** 150k
- **Finetuning tokens:** 600B
- **Precision:** bfloat16
## Hardware
- **GPUs:** 512 Tesla A100
- **Training time:** 14 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
|
pooruss-lsh/tool-llama7b-multi-tool-lora
|
pooruss-lsh
| 2023-06-19T02:34:46Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-19T02:29:18Z |
---
license: apache-2.0
---
# Model Card for Model ID
This is a LoRA version of the tool-llama model introduced in [ToolBench](https://github.com/OpenBMB/ToolBench) for the multi-tool scenario.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **License:** apache-2.0
- **Finetuned from model:** LLaMA-7b
## Uses
Refer to [ToolBench](https://github.com/OpenBMB/ToolBench).
## Training Details
Trained with the released multi-tool data in ToolBench, including 3 scenarios.
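A minimal loading sketch with 🤗 PEFT (the base-model id `huggyllama/llama-7b` is an assumption; use whichever LLaMA-7b checkpoint ToolBench documents):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "huggyllama/llama-7b"  # assumed LLaMA-7b base; see the ToolBench docs
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "pooruss-lsh/tool-llama7b-multi-tool-lora")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```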
|
echrisantus/taxi-v3
|
echrisantus
| 2023-06-19T02:04:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T01:53:31Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the Q-table from the Hub).
import gym

model = load_from_hub(repo_id="echrisantus/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
echrisantus/q-FrozenLake-v1-4x4-noSlippery
|
echrisantus
| 2023-06-19T02:03:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T01:50:36Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebooks
# (it downloads and unpickles the Q-table from the Hub).
import gym

model = load_from_hub(repo_id="echrisantus/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
phi0112358/Zicklein-7B-german_Alpaca-ggml
|
phi0112358
| 2023-06-19T01:57:43Z | 0 | 4 | null |
[
"llama",
"alpaca",
"ggml",
"german",
"deutsch",
"zicklein",
"de",
"dataset:yahma/alpaca-cleaned",
"license:apache-2.0",
"region:us"
] | null | 2023-06-18T23:54:37Z |
---
license: apache-2.0
datasets:
- yahma/alpaca-cleaned
language:
- de
tags:
- llama
- alpaca
- ggml
- german
- deutsch
- zicklein
---
# Zicklein: A german finetuned instructions following LLaMA
## This is a ggml conversion of [Zicklein](https://github.com/avocardio/zicklein) 7B.
## Zicklein itself is a LLaMA model fine-tuned on a cleaned and German-translated [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) [dataset](https://github.com/LEL-A/GerAlpacaDataCleaned).
Currently I have only converted it with the **new k-quant method Q5_K_M**. I will gladly make more versions on request.
Other possible quantizations include: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K
An f16 version can be found here: [nikuya3/alpaca-lora-7b-german-base-51k-ggml](https://huggingface.co/nikuya3/alpaca-lora-7b-german-base-51k-ggml)
Compatible with **llama.cpp**, but also with:
- **text-generation-webui**
- **KoboldCpp**
- **ParisNeo/GPT4All-UI**
- **llama-cpp-python**
- **ctransformers**
---
## Prompt format
Since this model is based on the Alpaca dataset, the prompt should be formatted like this:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
```
Or **without** additional **input**:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
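Since the model is listed as compatible with **llama-cpp-python**, here is a minimal sketch (the model filename is illustrative; use the actual `.bin` file from this repo):
```python
from llama_cpp import Llama

llm = Llama(model_path="./zicklein-7b-q5_k_m.bin")  # illustrative filename
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWas ist die Hauptstadt von Deutschland?\n\n### Response:\n"
)
print(llm(prompt, max_tokens=128, temperature=0.7)["choices"][0]["text"])
```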
---
### That's it!
If you have any further questions, feel free to contact me or start a discussion
|
openaccess-ai-collective/dodona-pyg-v8p4-15b-preview
|
openaccess-ai-collective
| 2023-06-19T01:49:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-18T21:50:37Z |
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Dodona Pyg 15B 8K Preview
This finetune adds about 100MB of the v8p4 dataset to Dodona 15B 8K.
Dodona 15B 8K Preview is an experiment for fan-fiction and character ai use cases. It is built on Starcoder Plus to give it an 8K context length and is further pretrained on a corpus of fanfiction and visual novels.
Lots of mistakes were made during the creation of this model, but we didn't want to throw $300 of model training time out the window, so we are releasing this as a preview.
If you would like to see us continue to build more models like this, please consider donating by sponsoring us on GitHub on the link above or [Buy me a coffee](https://www.buymeacoffee.com/winglian).
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])
# Prompts
While this model is minimally finetuned with USER: / ASSISTANT: prompts, it seems to respond better to Alpaca style prompts:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
...
### Response:
```
<img src="https://huggingface.co/openaccess-ai-collective/dodona-15b-preview/resolve/main/dodona.png" alt="oracle of dodona" width="600" height="500"/>
|
Calluna/Asuna
|
Calluna
| 2023-06-19T01:47:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T01:47:54Z |
---
license: creativeml-openrail-m
---
|
NasimB/bert-concat
|
NasimB
| 2023-06-19T01:36:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:generator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-18T20:31:41Z |
---
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-concat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-concat
This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 14
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.3397 | 0.25 | 500 | 6.6405 |
| 6.5835 | 0.51 | 1000 | 6.5183 |
| 6.4967 | 0.76 | 1500 | 6.4926 |
| 6.451 | 1.01 | 2000 | 6.4507 |
| 6.4104 | 1.26 | 2500 | 6.4097 |
| 6.3868 | 1.52 | 3000 | 6.4019 |
| 6.3717 | 1.77 | 3500 | 6.3789 |
| 6.3361 | 2.02 | 4000 | 6.3596 |
| 6.3099 | 2.28 | 4500 | 6.3345 |
| 6.2807 | 2.53 | 5000 | 6.3050 |
| 6.2578 | 2.78 | 5500 | 6.2843 |
| 6.2356 | 3.03 | 6000 | 6.2735 |
| 6.2017 | 3.29 | 6500 | 6.2527 |
| 6.1837 | 3.54 | 7000 | 6.2277 |
| 6.1682 | 3.79 | 7500 | 6.2102 |
| 6.1443 | 4.04 | 8000 | 6.1917 |
| 6.1128 | 4.3 | 8500 | 6.1767 |
| 6.1034 | 4.55 | 9000 | 6.1678 |
| 6.0838 | 4.8 | 9500 | 6.1552 |
| 6.0641 | 5.06 | 10000 | 6.1401 |
| 6.0417 | 5.31 | 10500 | 6.1350 |
| 6.0247 | 5.56 | 11000 | 6.1123 |
| 6.0125 | 5.81 | 11500 | 6.1082 |
| 6.0028 | 6.07 | 12000 | 6.1022 |
| 5.9788 | 6.32 | 12500 | 6.0895 |
| 5.9739 | 6.57 | 13000 | 6.0828 |
| 5.9545 | 6.83 | 13500 | 6.0687 |
| 5.9441 | 7.08 | 14000 | 6.0652 |
| 5.923 | 7.33 | 14500 | 6.0567 |
| 5.9115 | 7.58 | 15000 | 6.0492 |
| 5.9106 | 7.84 | 15500 | 6.0466 |
| 5.8943 | 8.09 | 16000 | 6.0315 |
| 5.8726 | 8.34 | 16500 | 6.0339 |
| 5.8665 | 8.59 | 17000 | 6.0243 |
| 5.8548 | 8.85 | 17500 | 6.0193 |
| 5.8431 | 9.1 | 18000 | 6.0111 |
| 5.8218 | 9.35 | 18500 | 6.0053 |
| 5.8193 | 9.61 | 19000 | 6.0026 |
| 5.8174 | 9.86 | 19500 | 5.9927 |
| 5.7954 | 10.11 | 20000 | 5.9873 |
| 5.7779 | 10.36 | 20500 | 5.9823 |
| 5.7749 | 10.62 | 21000 | 5.9799 |
| 5.7739 | 10.87 | 21500 | 5.9784 |
| 5.7582 | 11.12 | 22000 | 5.9757 |
| 5.7415 | 11.38 | 22500 | 5.9686 |
| 5.7467 | 11.63 | 23000 | 5.9650 |
| 5.7448 | 11.88 | 23500 | 5.9648 |
| 5.7372 | 12.13 | 24000 | 5.9585 |
| 5.7207 | 12.39 | 24500 | 5.9596 |
| 5.7264 | 12.64 | 25000 | 5.9546 |
| 5.7212 | 12.89 | 25500 | 5.9516 |
| 5.7142 | 13.14 | 26000 | 5.9553 |
| 5.7103 | 13.4 | 26500 | 5.9551 |
| 5.7093 | 13.65 | 27000 | 5.9527 |
| 5.7183 | 13.9 | 27500 | 5.9507 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
andreac94/finetuning-sentiment-model-amazonbaby5000
|
andreac94
| 2023-06-19T01:35:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T01:03:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-amazonbaby5000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-amazonbaby5000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2733
- Accuracy: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nolanaatama/nylr
|
nolanaatama
| 2023-06-19T01:20:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-14T07:28:35Z |
---
license: creativeml-openrail-m
---
|
FALLENSTAR/JSR_style_LoRa
|
FALLENSTAR
| 2023-06-19T00:37:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-18T09:38:49Z |
Inspired by the streets of Tokyo-to and dedicated to Jet Set Radio, I decided to make a Jet Set Radio style LoRA.
It works well for characters as well as everything else.
In my opinion, the best settings are: Steps: 25; Sampler: DPM++ SDE Karras or Euler a; CFG scale: 6.5-11; LoRA strength: 1.




|
ardhies/mej
|
ardhies
| 2023-06-19T00:34:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-19T00:33:26Z |
---
license: creativeml-openrail-m
---
|
timjwhite/Reinforce-Pixelcopter-PLE-v0
|
timjwhite
| 2023-06-19T00:27:45Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-18T04:07:28Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 26.90 +/- 16.23
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hts98/whisper-medium-paper
|
hts98
| 2023-06-19T00:05:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-17T16:52:26Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-medium-paper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-paper
This model is a fine-tuned version of [hts98/whisper-medium-paper](https://huggingface.co/hts98/whisper-medium-paper) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4741
- Wer: 38.0826
## Model description
More information needed
## Intended uses & limitations
More information needed
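Until more detail is added, inference should follow the standard Whisper pattern (a hedged sketch; the audio path is a placeholder):
```python
from transformers import pipeline

# Standard ASR pipeline around this checkpoint.
asr = pipeline("automatic-speech-recognition", model="hts98/whisper-medium-paper")
print(asr("sample.wav")["text"])  # replace with a real audio file
```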
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 13
- eval_batch_size: 13
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 264 | 0.3606 | 39.1061 |
| 0.1341 | 2.0 | 528 | 0.3903 | 38.9291 |
| 0.1341 | 3.0 | 792 | 0.4337 | 39.0249 |
| 0.0469 | 4.0 | 1056 | 0.4426 | 38.3089 |
| 0.0469 | 5.0 | 1320 | 0.4741 | 38.0826 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
NasimB/gpt2_left_out_open_subtitles
|
NasimB
| 2023-06-18T23:58:42Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-18T21:32:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2_left_out_open_subtitles
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_left_out_open_subtitles
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.353 | 0.35 | 500 | 5.4389 |
| 5.0517 | 0.71 | 1000 | 4.9981 |
| 4.6553 | 1.06 | 1500 | 4.7472 |
| 4.3892 | 1.41 | 2000 | 4.5957 |
| 4.258 | 1.77 | 2500 | 4.4664 |
| 4.0843 | 2.12 | 3000 | 4.3902 |
| 3.922 | 2.48 | 3500 | 4.3282 |
| 3.89 | 2.83 | 4000 | 4.2537 |
| 3.713 | 3.18 | 4500 | 4.2408 |
| 3.6155 | 3.54 | 5000 | 4.2007 |
| 3.6227 | 3.89 | 5500 | 4.1522 |
| 3.4072 | 4.24 | 6000 | 4.1829 |
| 3.3651 | 4.6 | 6500 | 4.1499 |
| 3.3798 | 4.95 | 7000 | 4.1168 |
| 3.1038 | 5.3 | 7500 | 4.1744 |
| 3.1121 | 5.66 | 8000 | 4.1606 |
| 3.1104 | 6.01 | 8500 | 4.1563 |
| 2.8101 | 6.36 | 9000 | 4.2017 |
| 2.8457 | 6.72 | 9500 | 4.1979 |
| 2.7947 | 7.07 | 10000 | 4.2241 |
| 2.5839 | 7.43 | 10500 | 4.2544 |
| 2.6024 | 7.78 | 11000 | 4.2558 |
| 2.5182 | 8.13 | 11500 | 4.2838 |
| 2.41 | 8.49 | 12000 | 4.2963 |
| 2.416 | 8.84 | 12500 | 4.3009 |
| 2.3609 | 9.19 | 13000 | 4.3117 |
| 2.3182 | 9.55 | 13500 | 4.3138 |
| 2.3187 | 9.9 | 14000 | 4.3147 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
minoosh/videomae-base-finetuned-IEMOCAP_4
|
minoosh
| 2023-06-18T23:55:50Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-06-18T19:02:34Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-IEMOCAP_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-IEMOCAP_4
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3971
- Accuracy: 0.2747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4490
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3364 | 0.1 | 450 | 1.4142 | 0.2882 |
| 1.3951 | 1.1 | 900 | 1.3692 | 0.3058 |
| 1.2918 | 2.1 | 1350 | 1.3544 | 0.3357 |
| 1.2283 | 3.1 | 1800 | 1.3673 | 0.3298 |
| 1.2638 | 4.1 | 2250 | 1.3652 | 0.3404 |
| 1.2674 | 5.1 | 2700 | 1.3265 | 0.3538 |
| 1.2737 | 6.1 | 3150 | 1.3092 | 0.3802 |
| 1.1625 | 7.1 | 3600 | 1.2969 | 0.3884 |
| 1.35 | 8.1 | 4050 | 1.3067 | 0.3726 |
| 1.1373 | 9.1 | 4490 | 1.2835 | 0.3972 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ngkuissi/Pixelcopter-PLE-v0
|
ngkuissi
| 2023-06-18T23:53:44Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-10T19:03:24Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 45.70 +/- 42.58
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Hodurr/Sarah
|
Hodurr
| 2023-06-18T23:49:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T23:42:52Z |
---
license: creativeml-openrail-m
---
|
minoosh/videomae-base-finetuned-IEMOCAP_3
|
minoosh
| 2023-06-18T23:15:11Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-06-18T18:30:01Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-IEMOCAP_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-IEMOCAP_3
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3701
- Accuracy: 0.3179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4370
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3648 | 0.1 | 438 | 1.3661 | 0.3746 |
| 1.3188 | 1.1 | 876 | 1.3436 | 0.3215 |
| 1.2826 | 2.1 | 1314 | 1.3651 | 0.3215 |
| 1.2141 | 3.1 | 1752 | 1.3170 | 0.3759 |
| 1.2555 | 4.1 | 2190 | 1.3018 | 0.4053 |
| 1.2422 | 5.1 | 2628 | 1.3036 | 0.3565 |
| 1.2776 | 6.1 | 3066 | 1.2650 | 0.3921 |
| 1.2074 | 7.1 | 3504 | 1.2655 | 0.3971 |
| 1.2354 | 8.1 | 3942 | 1.2542 | 0.3952 |
| 1.0865 | 9.1 | 4370 | 1.2498 | 0.4078 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ac8736/toxic-tweets-fine-tuned-distilbert
|
ac8736
| 2023-06-18T23:12:27Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-21T04:45:30Z |
---
license: mit
---
Multi-label classification model trained on the Toxic Comment Classification Challenge dataset from Kaggle. (https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/data)
Fine-tuned from DistilBERT.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the fine-tuned model and tokenizer (saved locally under these paths).
model = AutoModelForSequenceClassification.from_pretrained("pretrained_model")
tokenizer = AutoTokenizer.from_pretrained("model_tokenizer")

X_train = ["Why is Owen's retirement from football not mentioned? He hasn't played a game since 2005."]
batch = tokenizer(X_train, truncation=True, padding='max_length', return_tensors="pt")
labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

with torch.no_grad():
    outputs = model(**batch)
    # Sigmoid, not softmax: each label is scored independently (multi-label setup).
    predictions = torch.sigmoid(outputs.logits) * 100  # convert to percentages
    probs = predictions[0].tolist()
    for i in range(len(probs)):
        print(f"{labels[i]}: {round(probs[i], 3)}%")
```
Expected output:
```
toxic: 0.676%
severe_toxic: 0.001%
obscene: 0.098%
threat: 0.007%
insult: 0.021%
identity_hate: 0.004%
```
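In practice, you would typically binarize these scores with a threshold (commonly 0.5 probability, i.e., 50 on this percentage scale) to obtain the final multi-label predictions; the example comment above falls well below that threshold on all six labels.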
|
Franx73/1
|
Franx73
| 2023-06-18T22:51:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T22:49:19Z |
---
license: creativeml-openrail-m
---
|
BenjaminOcampo/model-bert__trained-in-dynahate__seed-0
|
BenjaminOcampo
| 2023-06-18T22:23:26Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T22:22:19Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-dynahate__seed-0
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8120    0.8153    0.8136      1933
           1     0.8346    0.8316    0.8331      2167

    accuracy                         0.8239      4100
   macro avg     0.8233    0.8234    0.8234      4100
weighted avg     0.8239    0.8239    0.8239      4100
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.7508    0.7759    0.7631      1852
           1     0.8119    0.7897    0.8006      2268

    accuracy                         0.7835      4120
   macro avg     0.7813    0.7828    0.7819      4120
weighted avg     0.7844    0.7835    0.7838      4120
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
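Until the author fills this in, loading should follow the standard `transformers` pattern (a hedged sketch; the label meanings are not documented here, so check `model.config.id2label`):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "BenjaminOcampo/model-bert__trained-in-dynahate__seed-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example text to classify.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities; see model.config.id2label for label names
```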
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-bert__trained-in-dynahate__seed-1
|
BenjaminOcampo
| 2023-06-18T22:22:16Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T22:20:27Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-dynahate__seed-1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8132    0.7993    0.8062      1933
           1     0.8236    0.8362    0.8299      2167

    accuracy                         0.8188      4100
   macro avg     0.8184    0.8177    0.8180      4100
weighted avg     0.8187    0.8188    0.8187      4100
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.7585    0.7597    0.7591      1852
           1     0.8035    0.8025    0.8030      2268

    accuracy                         0.7833      4120
   macro avg     0.7810    0.7811    0.7811      4120
weighted avg     0.7833    0.7833    0.7833      4120
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-bert__trained-in-dynahate__seed-3
|
BenjaminOcampo
| 2023-06-18T22:20:24Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T22:19:15Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-dynahate__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8073    0.8236    0.8154      1933
           1     0.8398    0.8246    0.8321      2167

    accuracy                         0.8241      4100
   macro avg     0.8235    0.8241    0.8237      4100
weighted avg     0.8245    0.8241    0.8242      4100
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.7539    0.7759    0.7648      1852
           1     0.8126    0.7932    0.8028      2268

    accuracy                         0.7854      4120
   macro avg     0.7832    0.7846    0.7838      4120
weighted avg     0.7862    0.7854    0.7857      4120
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sakharamg/NMTKD
|
sakharamg
| 2023-06-18T22:15:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-18T20:57:36Z |
# Knowledge_Distillation
|
BenjaminOcampo/model-bert__trained-in-ihc__seed-0
|
BenjaminOcampo
| 2023-06-18T22:14:16Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T22:12:52Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-ihc__seed-0
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8253    0.8333    0.8293      2658
           1     0.7252    0.7137    0.7194      1638

    accuracy                         0.7877      4296
   macro avg     0.7752    0.7735    0.7743      4296
weighted avg     0.7871    0.7877    0.7874      4296
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8245    0.8217    0.8231      2658
           1     0.7122    0.7161    0.7142      1638

    accuracy                         0.7814      4296
   macro avg     0.7683    0.7689    0.7686      4296
weighted avg     0.7817    0.7814    0.7815      4296
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-bert__trained-in-ishate__seed-42
|
BenjaminOcampo
| 2023-06-18T22:09:01Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T22:08:08Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-ishate__seed-42
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.9043    0.8851    0.8946      2680
           1     0.8234    0.8512    0.8371      1687

    accuracy                         0.8720      4367
   macro avg     0.8639    0.8681    0.8658      4367
weighted avg     0.8731    0.8720    0.8724      4367
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.9064    0.8851    0.8956      2681
           1     0.8240    0.8548    0.8391      1687

    accuracy                         0.8734      4368
   macro avg     0.8652    0.8699    0.8674      4368
weighted avg     0.8746    0.8734    0.8738      4368
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rttl-ai/foody-bert
|
rttl-ai
| 2023-06-18T22:07:52Z | 105 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2012.15349",
"arxiv:1910.09700",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-11T22:55:55Z |
---
license: bigscience-bloom-rail-1.0
---
# Model Card for Foody Bert
# Model Details
## Model Description
Foody-bert results from a second round of fine-tuning on the text classification task.
It continues fine-tuning of [senty-bert](https://huggingface.co/rttl-ai/senty-bert), which was fine-tuned on Yelp reviews and the Stanford Sentiment Treebank with ternary labels (neutral, positive, negative).
- **Language(s) (NLP):** English
- **License:** bigscience-bloom-rail-1.0
- **Related Models:** More information needed
- **Parent Model:** More information needed
- **Resources for more information:**
- [Associated Paper](https://arxiv.org/abs/2012.15349)
# Uses
## Direct Use
- The primary intended use is in sentiment analysis of the texts of product and service reviews, and this is the domain in which the model has been evaluated to date.
- We urge caution about using these models for sentiment prediction in other domains. For example, sentiment expression in medical contexts and professional evaluations can be different from sentiment expression in product/service reviews.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
- We recommend careful study of how these models behave, even when they are used in the domain on which they were trained and assessed. The models are deep learning models about which it is challenging to gain full analytic command; two examples that appear synonymous to human readers can receive very different predictions from these models, in ways that are hard to anticipate or explain, and so it is crucial to do continual qualitative and quantitative evaluation as part of any deployment.
- We advise even more caution when using these models in new domains, as sentiment expression can shift in subtle (and not-so-subtle) ways across different domains, and this could lead specific phenomena to be mis-handled in ways that could have dramatic and pernicious consequences.
# Training Details
## Training Data
The model was trained on product/service reviews from Yelp, reviews from Amazon, reviews from IMDB (as defined by [this dataset](https://ai.stanford.edu/~amaas/data/sentiment/)), sentences from Rotten Tomatoes (as given by the [Stanford Sentiment Treebank](https://nlp.stanford.edu/sentiment/)), the [Customer Reviews](https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html) dataset, and on subsets of the DynaSent dataset. The dataset mainly contains restaurant review data.
Extensive details on these datasets are included in the [associated paper](https://arxiv.org/abs/2012.15349).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
@article{potts-etal-2020-dynasent,
title={{DynaSent}: A Dynamic Benchmark for Sentiment Analysis},
author={Potts, Christopher and Wu, Zhengxuan and Geiger, Atticus and Kiela, Douwe},
journal={arXiv preprint arXiv:2012.15349},
url={https://arxiv.org/abs/2012.15349},
year={2020}}
```
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("rttl-ai/foody-bert")
model = AutoModelForSequenceClassification.from_pretrained("rttl-ai/foody-bert")
```
</details>
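For plain inference, a `pipeline` wrapper should also work (a sketch; inspect `model.config.id2label` for the actual ternary label names):
```python
from transformers import pipeline

# Text-classification pipeline around the checkpoint above.
classifier = pipeline("text-classification", model="rttl-ai/foody-bert")
print(classifier("The ramen was fantastic, but the service was painfully slow."))
```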
|
BenjaminOcampo/model-bert__trained-in-sbic__seed-2
|
BenjaminOcampo
| 2023-06-18T22:03:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T22:02:02Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-sbic__seed-2
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8764    0.8467    0.8613      8756
           1     0.8330    0.8649    0.8487      7741

    accuracy                         0.8552     16497
   macro avg     0.8547    0.8558    0.8550     16497
weighted avg     0.8560    0.8552    0.8554     16497
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8676    0.8507    0.8590      8471
           1     0.8589    0.8750    0.8668      8798

    accuracy                         0.8630     17269
   macro avg     0.8632    0.8628    0.8629     17269
weighted avg     0.8631    0.8630    0.8630     17269
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-bert__trained-in-sbic__seed-3
|
BenjaminOcampo
| 2023-06-18T22:00:28Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:59:19Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-sbic__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8753    0.8472    0.8610      8756
           1     0.8332    0.8635    0.8481      7741

    accuracy                         0.8548     16497
   macro avg     0.8542    0.8553    0.8545     16497
weighted avg     0.8555    0.8548    0.8549     16497
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8731    0.8567    0.8648      8471
           1     0.8645    0.8801    0.8722      8798

    accuracy                         0.8686     17269
   macro avg     0.8688    0.8684    0.8685     17269
weighted avg     0.8687    0.8686    0.8686     17269
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
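No runnable snippet is provided yet; in the meantime, the following is a minimal sketch using the standard `transformers` sequence-classification API (the 0/1 label semantics are an assumption inferred from the result tables above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "BenjaminOcampo/model-bert__trained-in-sbic__seed-3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize an input and run a forward pass without tracking gradients.
inputs = tokenizer("Your text here", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Class 0 vs. 1 semantics are an assumption; see the result tables above.
print(logits.argmax(dim=-1).item())
```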
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-bert__trained-in-toxigen__seed-3
|
BenjaminOcampo
| 2023-06-18T21:59:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:57:55Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-toxigen__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8333    0.8778    0.8550       900
           1     0.8697    0.8229    0.8456       892

    accuracy                         0.8504      1792
   macro avg     0.8515    0.8503    0.8503      1792
weighted avg     0.8514    0.8504    0.8503      1792
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8775    0.7323    0.7984       538
           1     0.6849    0.8505    0.7588       368

    accuracy                         0.7804       906
   macro avg     0.7812    0.7914    0.7786       906
weighted avg     0.7993    0.7804    0.7823       906
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
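Pending official instructions, a minimal sketch with the `transformers` pipeline API, which matches this card's text-classification tag (the returned `LABEL_0`/`LABEL_1` names are library defaults, and their meaning here is an assumption):

```python
from transformers import pipeline

# Minimal sketch: load the checkpoint behind this card as a text classifier.
clf = pipeline("text-classification", model="BenjaminOcampo/model-bert__trained-in-toxigen__seed-3")
print(clf("Your text here"))
```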
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-bert__trained-in-toxigen__seed-42
|
BenjaminOcampo
| 2023-06-18T21:53:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:52:28Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-bert__trained-in-toxigen__seed-42
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8346    0.8856    0.8593       900
           1     0.8769    0.8229    0.8490       892

    accuracy                         0.8544      1792
   macro avg     0.8557    0.8542    0.8542      1792
weighted avg     0.8557    0.8544    0.8542      1792
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8810    0.7565    0.8140       538
           1     0.7050    0.8505    0.7709       368

    accuracy                         0.7947       906
   macro avg     0.7930    0.8035    0.7925       906
weighted avg     0.8095    0.7947    0.7965       906
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
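Until official usage code is added, here is a minimal sketch using the standard `transformers` sequence-classification API (the 0/1 label semantics are an assumption inferred from the result tables above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "BenjaminOcampo/model-bert__trained-in-toxigen__seed-42"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Run a single example through the classifier without gradient tracking.
inputs = tokenizer("Your text here", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The meaning of classes 0 and 1 is an assumption; see the result tables above.
print(logits.argmax(dim=-1).item())
```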
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-hatebert__trained-in-dynahate__seed-3
|
BenjaminOcampo
| 2023-06-18T21:47:55Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:46:29Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-dynahate__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8188    0.8231    0.8209      1933
           1     0.8414    0.8376    0.8395      2167

    accuracy                         0.8307      4100
   macro avg     0.8301    0.8303    0.8302      4100
weighted avg     0.8308    0.8307    0.8308      4100
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.7682    0.7694    0.7688      1852
           1     0.8115    0.8104    0.8109      2268

    accuracy                         0.7920      4120
   macro avg     0.7898    0.7899    0.7899      4120
weighted avg     0.7920    0.7920    0.7920      4120
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
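No snippet is provided yet; a minimal sketch with the `transformers` pipeline API, matching this card's text-classification tag (the default `LABEL_0`/`LABEL_1` names are assumed to follow the 0/1 classes in the tables above):

```python
from transformers import pipeline

# Minimal sketch: classify a single input with this card's checkpoint.
clf = pipeline("text-classification", model="BenjaminOcampo/model-hatebert__trained-in-dynahate__seed-3")
print(clf("Your text here"))
```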
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mmirmahdi/dqn-SpaceInvadersNoFrameskip-v4
|
mmirmahdi
| 2023-06-18T21:47:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-18T21:46:21Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 665.50 +/- 264.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mmirmahdi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mmirmahdi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mmirmahdi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
BenjaminOcampo/model-hatebert__trained-in-dynahate__seed-2
|
BenjaminOcampo
| 2023-06-18T21:46:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:44:40Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-dynahate__seed-2
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8233    0.8314    0.8273      1933
           1     0.8482    0.8408    0.8445      2167

    accuracy                         0.8363      4100
   macro avg     0.8357    0.8361    0.8359      4100
weighted avg     0.8365    0.8363    0.8364      4100
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.7742    0.7700    0.7721      1852
           1     0.8130    0.8166    0.8148      2268

    accuracy                         0.7956      4120
   macro avg     0.7936    0.7933    0.7934      4120
weighted avg     0.7955    0.7956    0.7956      4120
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
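In the absence of an official snippet, a minimal sketch using the standard `transformers` sequence-classification API (the 0/1 label semantics are an assumption inferred from the result tables above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "BenjaminOcampo/model-hatebert__trained-in-dynahate__seed-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Forward pass on one example, no gradients needed at inference time.
inputs = tokenizer("Your text here", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Class 0 vs. 1 semantics are an assumption; see the result tables above.
print(logits.argmax(dim=-1).item())
```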
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-hatebert__trained-in-ihc__seed-42
|
BenjaminOcampo
| 2023-06-18T21:44:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:42:58Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-ihc__seed-42
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8222    0.8123    0.8172      2658
           1     0.7012    0.7149    0.7080      1638

    accuracy                         0.7751      4296
   macro avg     0.7617    0.7636    0.7626      4296
weighted avg     0.7760    0.7751    0.7755      4296
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8232    0.8093    0.8162      2658
           1     0.6988    0.7179    0.7082      1638

    accuracy                         0.7744      4296
   macro avg     0.7610    0.7636    0.7622      4296
weighted avg     0.7757    0.7744    0.7750      4296
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
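Pending official usage code, a minimal sketch with the `transformers` pipeline API, matching this card's text-classification tag (the returned label names are library defaults; their meaning is an assumption):

```python
from transformers import pipeline

# Minimal sketch: run the checkpoint behind this card on one input.
clf = pipeline("text-classification", model="BenjaminOcampo/model-hatebert__trained-in-ihc__seed-42")
print(clf("Your text here"))
```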
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-hatebert__trained-in-ihc__seed-2
|
BenjaminOcampo
| 2023-06-18T21:42:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:37:05Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-ihc__seed-2
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8187    0.8292    0.8239      2658
           1     0.7170    0.7021    0.7094      1638

    accuracy                         0.7807      4296
   macro avg     0.7678    0.7656    0.7667      4296
weighted avg     0.7799    0.7807    0.7803      4296
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8150    0.8322    0.8235      2658
           1     0.7181    0.6935    0.7056      1638

    accuracy                         0.7793      4296
   macro avg     0.7666    0.7629    0.7646      4296
weighted avg     0.7781    0.7793    0.7786      4296
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
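No official snippet is given; a minimal sketch using the standard `transformers` sequence-classification API (the 0/1 label semantics are an assumption inferred from the result tables above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "BenjaminOcampo/model-hatebert__trained-in-ihc__seed-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Encode one input and take the argmax class at inference time.
inputs = tokenizer("Your text here", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The 0/1 class meaning is an assumption; see the result tables above.
print(logits.argmax(dim=-1).item())
```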
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-hatebert__trained-in-ihc__seed-3
|
BenjaminOcampo
| 2023-06-18T21:40:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:35:28Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-ihc__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8175    0.8424    0.8297      2658
           1     0.7309    0.6947    0.7124      1638

    accuracy                         0.7861      4296
   macro avg     0.7742    0.7686    0.7710      4296
weighted avg     0.7844    0.7861    0.7850      4296
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8141    0.8337    0.8238      2658
           1     0.7192    0.6911    0.7049      1638

    accuracy                         0.7793      4296
   macro avg     0.7666    0.7624    0.7643      4296
weighted avg     0.7779    0.7793    0.7784      4296
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
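Until usage code is added here, a minimal sketch with the `transformers` pipeline API, matching this card's text-classification tag (default label names; their meaning is an assumption):

```python
from transformers import pipeline

# Minimal sketch: one-line classifier over this card's checkpoint.
clf = pipeline("text-classification", model="BenjaminOcampo/model-hatebert__trained-in-ihc__seed-3")
print(clf("Your text here"))
```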
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-hatebert__trained-in-ihc__seed-1
|
BenjaminOcampo
| 2023-06-18T21:35:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:34:16Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-ihc__seed-1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8098    0.8330    0.8212      2658
           1     0.7157    0.6825    0.6988      1638

    accuracy                         0.7756      4296
   macro avg     0.7628    0.7577    0.7600      4296
weighted avg     0.7739    0.7756    0.7745      4296
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8147    0.8318    0.8232      2658
           1     0.7174    0.6929    0.7050      1638

    accuracy                         0.7789      4296
   macro avg     0.7661    0.7624    0.7641      4296
weighted avg     0.7776    0.7789    0.7781      4296
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
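In lieu of an official snippet, a minimal sketch using the standard `transformers` sequence-classification API (the 0/1 label semantics are an assumption inferred from the result tables above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "BenjaminOcampo/model-hatebert__trained-in-ihc__seed-1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Single-example inference without gradient tracking.
inputs = tokenizer("Your text here", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Class 0 vs. 1 semantics are an assumption; see the result tables above.
print(logits.argmax(dim=-1).item())
```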
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-hatebert__trained-in-ishate__seed-1
|
BenjaminOcampo
| 2023-06-18T21:31:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:29:35Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-ishate__seed-1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8964    0.8843    0.8903      2680
           1     0.8201    0.8376    0.8287      1687

    accuracy                         0.8663      4367
   macro avg     0.8582    0.8610    0.8595      4367
weighted avg     0.8669    0.8663    0.8665      4367
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.9008    0.8836    0.8921      2681
           1     0.8205    0.8453    0.8327      1687

    accuracy                         0.8688      4368
   macro avg     0.8606    0.8645    0.8624      4368
weighted avg     0.8698    0.8688    0.8692      4368
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
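No snippet is provided yet; a minimal sketch with the `transformers` pipeline API, matching this card's text-classification tag (the default label names and their mapping to the 0/1 classes above are assumptions):

```python
from transformers import pipeline

# Minimal sketch: classify one input with this card's checkpoint.
clf = pipeline("text-classification", model="BenjaminOcampo/model-hatebert__trained-in-ishate__seed-1")
print(clf("Your text here"))
```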
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anavarrete/bias-mitigation-t2-ceo
|
anavarrete
| 2023-06-18T21:29:05Z | 31 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-18T21:22:38Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bias_mitigation_t2_CEO Dreambooth model trained by anavarrete with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
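The checkpoint can also be loaded directly with diffusers — a minimal sketch; the instance prompt below is an assumption taken from the concept name and may need adjusting:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-tuned checkpoint from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "anavarrete/bias-mitigation-t2-ceo", torch_dtype=torch.float16
).to("cuda")

# The prompt token is an assumption based on the concept name
image = pipe("a photo of bias_mitigation_t2_CEO", num_inference_steps=50).images[0]
image.save("sample.png")
```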
Sample pictures of this concept:
|
BenjaminOcampo/model-hatebert__trained-in-ishate__seed-3
|
BenjaminOcampo
| 2023-06-18T21:28:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:26:31Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-ishate__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8994    0.8772    0.8882      2680
           1     0.8123    0.8441    0.8279      1687

    accuracy                         0.8644      4367
   macro avg     0.8559    0.8607    0.8580      4367
weighted avg     0.8658    0.8644    0.8649      4367
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.9108    0.8870    0.8987      2681
           1     0.8275    0.8619    0.8444      1687

    accuracy                         0.8773      4368
   macro avg     0.8692    0.8744    0.8715      4368
weighted avg     0.8786    0.8773    0.8777      4368
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
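Pending official instructions, here is a minimal sketch assuming the standard Transformers sequence-classification API (the 0/1 label semantics follow the classification reports above but are otherwise undocumented):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "BenjaminOcampo/model-hatebert__trained-in-ishate__seed-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class: 0 or 1 (semantics undocumented)
```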
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
edu-linguistic/deberta-v3-large-edu-rm
|
edu-linguistic
| 2023-06-18T21:24:40Z | 0 | 0 | null |
[
"en",
"dataset:Dahoas/rm-static",
"dataset:openai/webgpt_comparisons",
"region:us"
] | null | 2023-06-17T09:15:50Z |
---
datasets:
- Dahoas/rm-static
- openai/webgpt_comparisons
language:
- en
---
## Inference Example:
```python
from peft import PeftConfig, PeftModelForSequenceClassification
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

peft_model_id = "edu-linguistic/deberta-v3-large-edu-rm"
model_name = "microsoft/deberta-v3-large"

config = PeftConfig.from_pretrained(peft_model_id)
model_config = AutoConfig.from_pretrained(model_name)
model_config.num_labels = 1  # single scalar reward head

model = AutoModelForSequenceClassification.from_pretrained(model_name, config=model_config)
model = PeftModelForSequenceClassification.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = "<|prompter|> When using linear regression, how do you help prevent numerical instabilities? (One or multiple answers) \n <|assistant|> 4. add more features"
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

score = model(**inputs).logits.cpu().detach()
print(score)
```
|
BenjaminOcampo/model-hatebert__trained-in-sbic__seed-42
|
BenjaminOcampo
| 2023-06-18T21:23:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:22:22Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-sbic__seed-42
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8295    0.7823    0.8052      5736
           1     0.8874    0.9143    0.9006     10761

    accuracy                         0.8684     16497
   macro avg     0.8584    0.8483    0.8529     16497
weighted avg     0.8673    0.8684    0.8675     16497
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.8207    0.7773    0.7984      5500
           1     0.8984    0.9206    0.9094     11769

    accuracy                         0.8750     17269
   macro avg     0.8596    0.8490    0.8539     17269
weighted avg     0.8737    0.8750    0.8740     17269
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
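Pending official instructions, here is a minimal sketch assuming the standard Transformers sequence-classification API (the 0/1 label semantics follow the classification reports above but are otherwise undocumented):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "BenjaminOcampo/model-hatebert__trained-in-sbic__seed-42"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class: 0 or 1 (semantics undocumented)
```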
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-hatebert__trained-in-toxigen__seed-3
|
BenjaminOcampo
| 2023-06-18T21:18:04Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:16:59Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-toxigen__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8559    0.8844    0.8699       900
           1     0.8794    0.8498    0.8643       892

    accuracy                         0.8672      1792
   macro avg     0.8676    0.8671    0.8671      1792
weighted avg     0.8676    0.8672    0.8671      1792
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.9184    0.6691    0.7742       538
           1     0.6537    0.9130    0.7619       368

    accuracy                         0.7682       906
   macro avg     0.7860    0.7911    0.7680       906
weighted avg     0.8109    0.7682    0.7692       906
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
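Pending official instructions, here is a minimal sketch assuming the standard Transformers sequence-classification API (the 0/1 label semantics follow the classification reports above but are otherwise undocumented):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "BenjaminOcampo/model-hatebert__trained-in-toxigen__seed-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class: 0 or 1 (semantics undocumented)
```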
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenjaminOcampo/model-hatebert__trained-in-toxigen__seed-0
|
BenjaminOcampo
| 2023-06-18T21:16:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-18T21:15:31Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-hatebert__trained-in-toxigen__seed-0
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Classification results dev set**
```
              precision    recall  f1-score   support

           0     0.8601    0.8678    0.8639       900
           1     0.8654    0.8576    0.8615       892

    accuracy                         0.8627      1792
   macro avg     0.8628    0.8627    0.8627      1792
weighted avg     0.8627    0.8627    0.8627      1792
```
**Classification results test set**
```
              precision    recall  f1-score   support

           0     0.9160    0.6691    0.7734       538
           1     0.6530    0.9103    0.7605       368

    accuracy                         0.7671       906
   macro avg     0.7845    0.7897    0.7669       906
weighted avg     0.8092    0.7671    0.7681       906
```
- **Developed by:** Benjamin Ocampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
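Pending official instructions, here is a minimal sketch assuming the standard Transformers sequence-classification API (the 0/1 label semantics follow the classification reports above but are otherwise undocumented):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "BenjaminOcampo/model-hatebert__trained-in-toxigen__seed-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class: 0 or 1 (semantics undocumented)
```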
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VladmirPutgang/ppo-Huggy
|
VladmirPutgang
| 2023-06-18T20:56:20Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-18T20:56:02Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: VladmirPutgang/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
magnustragardh/ppo-Huggy
|
magnustragardh
| 2023-06-18T20:55:30Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-18T20:55:05Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: magnustragardh/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Todmy/Taxi_q_learning
|
Todmy
| 2023-06-18T20:42:14Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-18T20:27:49Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_q_learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="Todmy/Taxi_q_learning", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Lzhou286/q-FrozenLake-v1-4x4-noSlippery
|
Lzhou286
| 2023-06-18T20:20:42Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-18T20:20:29Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="Lzhou286/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
IvanKun/Reinforce-PixelCopter-1
|
IvanKun
| 2023-06-18T19:51:55Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-18T19:51:42Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 9.40 +/- 6.73
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
emiliskra/donut-base-sroie
|
emiliskra
| 2023-06-18T19:42:39Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-06-18T18:32:30Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nmitchko/medfalcon-40b-lora
|
nmitchko
| 2023-06-18T19:22:01Z | 9 | 3 |
peft
|
[
"peft",
"medical",
"text-generation",
"en",
"arxiv:2106.09685",
"license:cc-by-nc-3.0",
"region:us"
] |
text-generation
| 2023-06-14T16:19:36Z |
---
language:
- en
library_name: peft
pipeline_tag: text-generation
tags:
- medical
license: cc-by-nc-3.0
---
# MedFalcon 40b LoRA
## Model Description
### Architecture
`nmitchko/medfalcon-40b-lora` is a LoRA adapter for a large language model, fine-tuned specifically for medical domain tasks.
It is based on [`Falcon-40b-instruct`](https://huggingface.co/tiiuae/falcon-40b-instruct/) at 40 billion parameters.
The primary goal of this model is to improve question-answering and medical dialogue tasks.
It was trained using [LoRA](https://arxiv.org/abs/2106.09685), specifically [QLoRA](https://github.com/artidoro/qlora), to reduce the memory footprint.
> This LoRA supports 4-bit and 8-bit modes.
### Requirements
```
bitsandbytes>=0.39.0
peft
transformers
```
Steps to load this model:
1. Load the base model with 8-bit (QLoRA-style) quantization
2. Apply the LoRA adapter using peft
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import transformers
import torch

model_name = "tiiuae/falcon-40b-instruct"
LoRA = "nmitchko/medfalcon-40b-lora"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the base model in 8-bit to reduce the memory footprint
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, LoRA)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

sequences = pipeline(
    "What does the drug ceftriaxone do?\nDoctor:",
    max_length=200,
    do_sample=True,
    top_k=40,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)

for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
|
LarryAIDraw/NarmayaLoHa
|
LarryAIDraw
| 2023-06-18T19:21:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T18:57:31Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/91428/character-narmaya-multivercostume-granblue-fantasy-loraloconloha
|
LarryAIDraw/kanna
|
LarryAIDraw
| 2023-06-18T19:20:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T18:55:35Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/92475/ogata-kanna-blue-archive-or-goofy-ai
|
LarryAIDraw/AngelBeats_Yuriv2-04
|
LarryAIDraw
| 2023-06-18T19:19:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T18:55:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/91800/angel-beats-yuri-nakamura-yurippe
|
LarryAIDraw/carenhortensiaamor
|
LarryAIDraw
| 2023-06-18T19:19:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T18:54:26Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/91301/caren-hortensia-amor-or-fate-grand-order
|
LarryAIDraw/MiyakoSaitouV1
|
LarryAIDraw
| 2023-06-18T19:18:56Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T18:53:55Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/41207/miyako-saitou-or-oshi-no-ko
|