| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-28 12:29:09 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 534 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-28 12:26:21 |
| card | string | length 11 – 1.01M |
JaimeAi/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
|
JaimeAi
| 2023-06-28T20:41:19Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T19:53:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0298
- F1: 0.5415
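As a quick way to try the checkpoint, here is a minimal inference sketch (not part of the original card; the example sentence is illustrative and the label names depend on the uploaded config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative usage sketch: score a Spanish-language review with the fine-tuned checkpoint.
repo = "JaimeAi/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("El servicio fue excelente y la comida deliciosa.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])  # label names come from the model's config
```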
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.098 | 1.0 | 766 | 1.0509 | 0.5115 |
| 0.9349 | 2.0 | 1532 | 1.0298 | 0.5415 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cleanrl/Humanoid-v4-ddpg_continuous_action-seed1
|
cleanrl
| 2023-06-28T20:26:22Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Humanoid-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T20:26:15Z |
---
tags:
- Humanoid-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v4
type: Humanoid-v4
metrics:
- type: mean_reward
value: 1015.40 +/- 612.19
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Humanoid-v4**
This is a trained model of a DDPG agent playing Humanoid-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Humanoid-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Humanoid-v4-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py
curl -OL https://huggingface.co/cleanrl/Humanoid-v4-ddpg_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Humanoid-v4-ddpg_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Humanoid-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'env_id': 'Humanoid-v4',
'exp_name': 'ddpg_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
joncrain/test
|
joncrain
| 2023-06-28T20:18:56Z | 0 | 0 | null |
[
"dataset:fka/awesome-chatgpt-prompts",
"region:us"
] | null | 2023-06-28T20:12:23Z |
---
datasets:
- fka/awesome-chatgpt-prompts
---
|
mmirmahdi/Reinforce-CartPole-v1
|
mmirmahdi
| 2023-06-28T20:14:00Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T20:13:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
finding-fossils/metaextractor
|
finding-fossils
| 2023-06-28T20:05:50Z | 112 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"Beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-15T16:24:37Z |
---
tags:
- Beta
license: mit
thumbnail: >-
https://huggingface.co/finding-fossils/metaextractor/resolve/main/ffossils-logo-text.png
widget:
- text: The core sample was aged at 12300 - 13500 BP and found at 210m a.s.l.
example_title: Age/Alti
- text: In Northern Canada, the BGC site core was primarily made up of Pinus pollen.
example_title: Taxa/Site/Region
metrics:
- precision
- recall
---
<img src="https://huggingface.co/finding-fossils/metaextractor/resolve/main/ffossils-logo-text.png" width="400">
# MetaExtractor
<!-- Provide a quick summary of what the model is/does. -->
This model extracts metadata from research articles related to Paleoecology.
The entities detected by this model are:
- **AGE**: when historical ages are mentioned such as 1234 AD or 4567 BP (before present)
- **TAXA**: plant or animal taxa names indicating what samples contained
- **GEOG**: geographic coordinates indicating where samples were excavated from, e.g. 12'34"N 34'23"W
- **SITE**: site names for where samples were excavated from
- **REGION**: more general regions to provide context for where sites are located
- **EMAIL**: researcher emails in the articles able to be used for follow-up contact
- **ALTI**: altitudes of sites from where samples were excavated, e.g. 123 m a.s.l (above sea level)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Ty Andrews, Jenit Jain, Shaun Hutchinson, Kelly Wu, and Simon Goring
- **Shared by:** Neotoma Paleoecology Database
- **Model type:** Token Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** roberta-base
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/NeotomaDB/MetaExtractor
- **Paper:** TBD
- **Demo:** TBD
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model can be used to extract entities from any text that is Paleoecology-related or tangential to it. Potential uses include identifying unique SITE names in research papers in other domains.
### Direct Use
This model is deployed on the xDD (formerly GeoDeepDive) servers, where it is fed new research articles relevant to Neotoma and returns the extracted data.
This approach could be adapted to other domains by using the training and development code found [github.com/NeotomaDB/MetaExtractor](https://github.com/NeotomaDB/MetaExtractor) to run similar data extraction for other research domains.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model was trained entirely on English research articles and will likely not perform well on research in other languages. In addition, the training articles were chosen because they were already present in the Neotoma database, which introduces selection bias: they represent what is already known to be relevant to Neotoma, so the model may not handle new, previously missed articles well.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("finding-fossils/metaextractor")
model = AutoModelForTokenClassification.from_pretrained("finding-fossils/metaextractor")
ner_pipe = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner_pipe("In Northern Canada, the BGC site core was primarily made up of Pinus pollen.")
# Output
[
{
"entity_group": "REGION",
"score": 0.8088379502296448,
"word": " Northern Canada,",
"start": 3,
"end": 19
},
{
"entity_group": "SITE",
"score": 0.8307041525840759,
"word": " BGC",
"start": 24,
"end": 27
},
{
"entity_group": "TAXA",
"score": 0.9806344509124756,
"word": " Pinus",
"start": 63,
"end": 68
}
]
```
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was trained on a set of 39 research articles deemed relevant to the Neotoma Database, all written in English. The entities were labelled by the project team, with early models used for pre-labelling to speed up the process.
A 70/15/15 train/validation/test split was used, with the following breakdown of words and entities:
| | Train | Validation | Test |
|---|:---:|:---:|:---:|
| Articles | 28 | 6 | 6 |
| Words | 220857 | 37809 | 36098 |
| TAXA Entities | 3352 | 650 | 570 |
| SITE Entities | 1228 | 177 | 219 |
| REGION Entities | 2314 | 318 | 258 |
| GEOG Entities | 188 | 37 | 8 |
| AGE Entities | 919 | 206 | 153 |
| ALTI Entities | 99 | 24 | 14 |
| EMAIL Entities | 14 | 4 | 11 |
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
For full training details please see the GitHub repository and Wiki: [github.com/NeotomaDB/MetaExtractor](https://github.com/NeotomaDB/MetaExtractor)
## Results & Metrics
For full model results see the report here: [Final Project Report](https://github.com/NeotomaDB/MetaExtractor/blob/main/reports/final/finding-fossils-final.pdf)
|
ahishamm/vit-huge-isic-patch-14
|
ahishamm
| 2023-06-28T20:05:46Z | 198 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-28T19:59:05Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-isic-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-isic-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/isic_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6077
- Accuracy: 0.7917
- Recall: 0.7917
- F1: 0.7917
- Precision: 0.7917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
globuslabs/ScholarBERT-XL_1
|
globuslabs
| 2023-06-28T20:01:30Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"science",
"multi-displinary",
"en",
"arxiv:2205.11342",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-05-22T22:32:14Z |
---
language: en
tags:
- science
- multi-displinary
license: apache-2.0
---
# ScholarBERT-XL_1 Model
This is the **ScholarBERT-XL_1** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**2.2B tokens**).
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model has a total of 770M parameters.
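The card does not include a usage snippet; here is a minimal fill-mask sketch, assuming the checkpoint loads with the standard `transformers` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# Illustrative sketch: BERT-style models use the [MASK] token for fill-mask inference.
unmasker = pipeline("fill-mask", model="globuslabs/ScholarBERT-XL_1")
for prediction in unmasker("The enzyme catalyzes the [MASK] of the substrate."):
    print(prediction["token_str"], round(prediction["score"], 3))
```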
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 36 |
| Hidden Size | 1280 |
| Attention Heads | 20 |
| Total Parameters | 770M |
# Training Dataset
The vocabulary and the model are pretrained on **1% of the PRD** scientific literature dataset.
The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”),
a nonprofit organization based in California. The dataset was constructed from a corpus
of journal article files, from which we successfully extracted text from 75,496,055 articles across 178,928 journals.
The articles span across Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2023diminishing,
title={The Diminishing Returns of Masked Language Models to Science},
author={Zhi Hong and Aswathy Ajith and Gregory Pauloski and Eamon Duede and Kyle Chard and Ian Foster},
year={2023},
eprint={2205.11342},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ahishamm/vit-large-isic-patch-32
|
ahishamm
| 2023-06-28T19:58:41Z | 197 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-28T19:53:00Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-isic-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-isic-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/isic_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6200
- Accuracy: 0.7639
- Recall: 0.7639
- F1: 0.7639
- Precision: 0.7639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rttl-ai/bert-base-uncased-yelp-polarity
|
rttl-ai
| 2023-06-28T19:58:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:yelp_polarity",
"arxiv:2004.10964",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T14:16:21Z |
---
license: apache-2.0
datasets:
- yelp_polarity
language:
- en
model-index:
- name: rttl-ai/bert-base-uncased-polarity
results:
- task:
type: text classification
name: binary_classification
dataset:
type: yelp_polarity
name: yelp_polarity
config: default
split: test
metrics:
- type: accuracy
value: 0.98
name: Accuracy
---
## Model Details
**Model Description:** This model is a fine-tuned checkpoint of [bert-base-uncased](https://huggingface.co/bert-base-uncased), trained on Yelp reviews.
- **Developed by:** rttl-ai
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Resources for more information:**
- The model was pre-trained with task-adaptive pre-training [TAPT](https://arxiv.org/pdf/2004.10964.pdf) with an increased masking rate, no corruption strategy, and using WWM, following [this paper](https://aclanthology.org/2023.eacl-main.217.pdf)
- pre-trained on Yelp reviews
- fine-tuned on Yelp reviews for binary text classification
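The card stops short of a usage example; a minimal sketch for the binary classification task described above (the review text is illustrative, and the exact label names depend on the exported config):
```python
from transformers import pipeline

# Illustrative sketch: classify a review with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="rttl-ai/bert-base-uncased-yelp-polarity")
print(classifier("The food was great and the staff were friendly."))
```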
|
YakovElm/MariaDB_5_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T19:55:08Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T19:54:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_5_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_5_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0696
- Train Accuracy: 0.9771
- Validation Loss: 0.4323
- Validation Accuracy: 0.9221
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4972 | 0.7607 | 0.2633 | 0.9121 | 0 |
| 0.1913 | 0.9294 | 0.3861 | 0.9121 | 1 |
| 0.0696 | 0.9771 | 0.4323 | 0.9221 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ahishamm/vit-large-isic-patch-16
|
ahishamm
| 2023-06-28T19:52:45Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-28T19:47:01Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-isic-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-isic-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/isic_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7317
- Accuracy: 0.75
- Recall: 0.75
- F1: 0.75
- Precision: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi
|
shirsh10mall
| 2023-06-28T19:52:16Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T03:08:56Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5746
- Validation Loss: 4.4287
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 10, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.5743 | 4.4287 | 0 |
| 4.5751 | 4.4287 | 1 |
| 4.5730 | 4.4287 | 2 |
| 4.5752 | 4.4287 | 3 |
| 4.5757 | 4.4287 | 4 |
| 4.5753 | 4.4287 | 5 |
| 4.5729 | 4.4287 | 6 |
| 4.5759 | 4.4287 | 7 |
| 4.5749 | 4.4287 | 8 |
| 4.5746 | 4.4287 | 9 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
bk6000/q-Taxi-v3
|
bk6000
| 2023-06-28T19:48:43Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:48:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="bk6000/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
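The snippet above relies on a `load_from_hub` helper defined in the Deep RL course notebook rather than an importable package function. A minimal re-implementation under that assumption (using `huggingface_hub` and `pickle`; the `"qtable"` key name is also an assumption), followed by a greedy rollout:
```python
import pickle

import gymnasium as gym  # or `import gym`, depending on the notebook version
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Sketch of the course helper: download and unpickle the Q-table dict."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="bk6000/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```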
|
alitair/ppo-LunarLander-v2
|
alitair
| 2023-06-28T19:48:11Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:47:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.02 +/- 10.27
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
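The usage block above is left as a TODO by the author; a hedged completion using `huggingface_sb3` might look like the following (the checkpoint filename is an assumption, so check the repository's file list):
```python
import gymnasium as gym  # SB3 >= 2.0 targets Gymnasium; older versions use `gym`
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical completion of the TODO: download the checkpoint and load it with SB3.
checkpoint = load_from_hub(repo_id="alitair/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```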
|
Jumartineze/xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos
|
Jumartineze
| 2023-06-28T19:45:04Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T17:15:13Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9935
- F1: 0.5903
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.005 | 1.0 | 766 | 0.9719 | 0.5623 |
| 0.8756 | 2.0 | 1532 | 0.9842 | 0.5655 |
| 0.753 | 3.0 | 2298 | 0.9935 | 0.5903 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
S3S3/ppo-Pyramids_Training1
|
S3S3
| 2023-06-28T19:42:01Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:41:53Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: S3S3/ppo-Pyramids_Training1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
PickleYard/PerfectWorld
|
PickleYard
| 2023-06-28T19:38:43Z | 7 | 0 |
diffusers
|
[
"diffusers",
"ai-art",
"style-transfer",
"animation",
"deep-learning",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T18:41:57Z |
---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- ai-art
- style-transfer
- animation
- deep-learning
- text-to-image
---
|
bk6000/q-FrozenLake-v1-4x4-noSlippery
|
bk6000
| 2023-06-28T19:27:47Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:27:45Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="bk6000/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
richardr1126/guanaco-13b-merged
|
richardr1126
| 2023-06-28T19:23:40Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"guanaco",
"13b",
"dataset:timdettmers/openassistant-guanaco",
"arxiv:2305.14314",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T01:04:14Z |
---
datasets:
- timdettmers/openassistant-guanaco
tags:
- llama
- guanaco
- 13b
---
This model was created by merging [timdettmers/guanaco-13b](https://huggingface.co/timdettmers/guanaco-13b) QLoRa adapter with the base LLaMA-13b model.
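The card does not show how the merge was performed; below is a hedged sketch of the usual PEFT merge-and-save flow (the base-model repository `huggyllama/llama-13b` is an assumption, the adapter id is the one cited above, and this is not the author's exact script):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical merge sketch: fold the QLoRA adapter weights into the base model.
base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "timdettmers/guanaco-13b")
merged = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-13b")
merged.save_pretrained("guanaco-13b-merged")
tokenizer.save_pretrained("guanaco-13b-merged")
```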
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
```
|
magnustragardh/Reinforce-Pixelcopter-PLE-v0
|
magnustragardh
| 2023-06-28T19:22:59Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:22:55Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 49.30 +/- 29.27
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vlkn/falcon_finetuned3
|
vlkn
| 2023-06-28T19:15:44Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-06-28T18:43:45Z |
---
tags:
- generated_from_trainer
model-index:
- name: falcon_finetuned3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_finetuned3
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 60
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1
|
cleanrl
| 2023-06-28T19:12:37Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Pusher-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T19:12:22Z |
---
tags:
- Pusher-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pusher-v4
type: Pusher-v4
metrics:
- type: mean_reward
value: -28.99 +/- 2.80
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Pusher-v4**
This is a trained model of a DDPG agent playing Pusher-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Pusher-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Pusher-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'env_id': 'Pusher-v4',
'exp_name': 'ddpg_continuous_action_jax',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
YakovElm/Jira_20_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T19:05:58Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T19:05:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_20_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_20_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0521
- Train Accuracy: 0.9856
- Validation Loss: 0.4925
- Validation Accuracy: 0.8612
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4491 | 0.7989 | 0.6370 | 0.6656 | 0 |
| 0.1533 | 0.9514 | 0.3511 | 0.9211 | 1 |
| 0.0521 | 0.9856 | 0.4925 | 0.8612 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cleanrl/Hopper-v4-ddpg_continuous_action-seed1
|
cleanrl
| 2023-06-28T18:59:37Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:59:25Z |
---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 1675.53 +/- 1038.51
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Hopper-v4**
This is a trained model of a DDPG agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py
curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Hopper-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'env_id': 'Hopper-v4',
'exp_name': 'ddpg_continuous_action',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Humanoid-v4-ddpg_continuous_action_jax-seed1
|
cleanrl
| 2023-06-28T18:54:13Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Humanoid-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:53:59Z |
---
tags:
- Humanoid-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v4
type: Humanoid-v4
metrics:
- type: mean_reward
value: 1303.07 +/- 456.80
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Humanoid-v4**
This is a trained model of a DDPG agent playing Humanoid-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Humanoid-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Humanoid-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/Humanoid-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Humanoid-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Humanoid-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'env_id': 'Humanoid-v4',
'exp_name': 'ddpg_continuous_action_jax',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
codervent981/admin-dashboard-template
|
codervent981
| 2023-06-28T18:39:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-28T18:39:01Z |
Codervent Admin Dashboard Template is a versatile and user-friendly web application interface designed specifically for administrators. With its clean and modern design, it offers a comprehensive set of features and tools to manage and monitor various aspects of an application or website. The template provides a responsive layout, making it accessible on different devices. It includes various widgets, charts, tables, and forms, allowing administrators to effectively analyze data, manage user accounts, track performance metrics, and perform administrative tasks efficiently. Codervent Admin Dashboard Template is a reliable solution for streamlining administrative workflows and enhancing productivity.
Read more: https://codervent.com/
|
vlkn/falcon_finetuned2
|
vlkn
| 2023-06-28T18:38:23Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-06-28T17:27:34Z |
---
tags:
- generated_from_trainer
model-index:
- name: falcon_finetuned2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_finetuned2
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 300
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hugo1499/bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
|
hugo1499
| 2023-06-28T18:36:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T00:10:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1233
- Accuracy: 0.5208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1308 | 1.0 | 383 | 1.1263 | 0.4914 |
| 0.9642 | 2.0 | 766 | 1.0872 | 0.5150 |
| 0.813 | 3.0 | 1149 | 1.1233 | 0.5208 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/Jira_15_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T18:26:07Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T18:25:15Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_15_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_15_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1151
- Train Accuracy: 0.9662
- Validation Loss: 0.8861
- Validation Accuracy: 0.7066
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5468 | 0.7016 | 0.9501 | 0.6151 | 0 |
| 0.2721 | 0.9019 | 0.9989 | 0.6372 | 1 |
| 0.1151 | 0.9662 | 0.8861 | 0.7066 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PickleYard/Elysian-Fields
|
PickleYard
| 2023-06-28T18:22:47Z | 13 | 0 |
diffusers
|
[
"diffusers",
"artificial-intelligence",
"ai-art",
"anime-style",
"content-creation",
"animation",
"dreamshaper",
"text-to-image",
"en",
"dataset:unknown",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T03:54:18Z |
---
language: en
library: diffusers
tags:
- artificial-intelligence
- ai-art
- anime-style
- content-creation
- animation
- dreamshaper
- text-to-image
license: other
datasets:
- unknown
library_name: diffusers
---
## Model Description
"Elysian Fields" is a model based on the original Dreamshaper model. The primary objective of this model is to generate artificial intelligence (AI)-driven art and animations that mimic the style of traditional paintings. It excels in creating lifelike portraits, intricate backgrounds, and anime-style characters.
Originally designed for creating unique portraits that transcend the boundaries of computer graphics and heavily-filtered photographs, this model has evolved to become an integral part of content creation and independent animation production. Leveraging the power of LoRA networks, it also supports the generation of anime-style images.
This model is hosted on the Hugging Face Model Hub and can be used for a wide range of creative applications, from content creation to animation and beyond.
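The repository tags include `diffusers:StableDiffusionPipeline` but the card has no code; a minimal generation sketch under that assumption (the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative sketch: the tags suggest the checkpoint loads as a StableDiffusionPipeline.
pipe = StableDiffusionPipeline.from_pretrained("PickleYard/Elysian-Fields", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a painterly portrait of a traveler in elysian fields, anime style").images[0]
image.save("elysian_fields.png")
```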
|
Cordelia/frozenLake
|
Cordelia
| 2023-06-28T18:19:02Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:18:59Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: frozenLake
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Cordelia/frozenLake", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
cleanrl/InvertedPendulum-v4-ddpg_continuous_action_jax-seed1
|
cleanrl
| 2023-06-28T18:18:47Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"InvertedPendulum-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:18:33Z |
---
tags:
- InvertedPendulum-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: InvertedPendulum-v4
type: InvertedPendulum-v4
metrics:
- type: mean_reward
value: 988.90 +/- 33.30
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **InvertedPendulum-v4**
This is a trained model of a DDPG agent playing InvertedPendulum-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id InvertedPendulum-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id InvertedPendulum-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'env_id': 'InvertedPendulum-v4',
'exp_name': 'ddpg_continuous_action_jax',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
nbiish/dqn-SpaceInvadersNoFrameskip-v4
|
nbiish
| 2023-06-28T18:06:47Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:06:09Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 556.00 +/- 103.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nbiish -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nbiish -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nbiish
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
cleanrl/Hopper-v4-ddpg_continuous_action_jax-seed1
|
cleanrl
| 2023-06-28T18:01:28Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T18:00:09Z |
---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 1696.13 +/- 547.20
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **Hopper-v4**
This is a trained model of a DDPG agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Hopper-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'env_id': 'Hopper-v4',
'exp_name': 'ddpg_continuous_action_jax',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Malaika/ppo-unit8-LunarLander-v2-test3
|
Malaika
| 2023-06-28T17:58:02Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T17:57:55Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -142.25 +/- 90.40
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': True,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Malaika/ppo-unit8-LunarLander-v2-test3',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
asdc/Bio-RoBERTime
|
asdc
| 2023-06-28T17:56:40Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"LABEL-0 = NONE",
"LABEL-1 = B-DATE",
"LABEL-2 = I-DATE",
"LABEL-3 = B-TIME",
"LABEL-4 = I-TIME",
"LABEL-5 = B-DURATION",
"LABEL-6 = I-DURATION",
"LABEL-7 = B-SET",
"LABEL-8 = I-SET",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-22T14:44:54Z |
---
license: apache-2.0
widget:
- text: "Ayer dormí la siesta durante 3 horas"
- text: "Recuerda tu cita con el médico el lunes a las 8 de la tarde"
- text: "Recuerda tomar la medicación cada noche"
- text: "Last day I slept for three hours"
- text: "Remember your doctor´s appointment on Monday at 6am"
tags:
- LABEL-0 = NONE
- LABEL-1 = B-DATE
- LABEL-2 = I-DATE
- LABEL-3 = B-TIME
- LABEL-4 = I-TIME
- LABEL-5 = B-DURATION
- LABEL-6 = I-DURATION
- LABEL-7 = B-SET
- LABEL-8 = I-SET
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Bio-RoBERTime
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio-RoBERTime
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the [E3C](https://github.com/hltfbk/E3C-Corpus) and Timebank datasets.
It achieves the following results on the [E3C corpus](https://github.com/hltfbk/E3C-Corpus) test set following the TempEval-3 evaluation metrics:
| E3C | Strict | Relaxed | type |
|------------|:-------:|--------:|-------:|
| RoBERTime | **0.7606** | **0.9108** | **0.8357** |
| Heideltime | 0.5945 | 0.7558 | 0.6083 |
| Annotador | 0.6006 | 0.7347 | 0.5598 |
RoBERTime is a token classification model: it labels each token with one of 9 possible labels. We follow the BIO labelling scheme, so each class has two possible values: Beginning or Interior. For more details on the implementation and evaluation, refer to the paper: ["RoBERTime: A novel model for the detection of temporal expressions in Spanish"](https://rua.ua.es/dspace/handle/10045/133235)
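As a quick illustration (a minimal sketch, not taken from the paper; the aggregation strategy is an assumption), the model can be queried through the `transformers` token-classification pipeline:
```python
from transformers import pipeline

# Minimal sketch: LABEL-0..LABEL-8 follow the BIO mapping listed in the tags above.
ner = pipeline("token-classification", model="asdc/Bio-RoBERTime", aggregation_strategy="simple")
print(ner("Recuerda tu cita con el médico el lunes a las 8 de la tarde"))
```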
## Model description
- **Developed by**: Alejandro Sánchez de Castro, Juan Martínez Romo, Lourdes Araujo
This model is the result of the paper "RoBERTime: A novel model for the detection of temporal expressions in Spanish"
- **Cite as**:
@article{sanchez2023robertime,
title={RoBERTime: A novel model for the detection of temporal expressions in Spanish},
author={Sánchez-de-Castro-Fernández, Alejandro and Araujo Serna, Lourdes and Martínez Romo, Juan},
year={2023},
publisher={Sociedad Española para el Procesamiento del Lenguaje Natural}
}
## Intended uses & limitations
This model is intended for detecting the extent of temporal expressions in Spanish. It may work in other languages thanks to RoBERTa's multilingual capabilities. The model does not normalize the value of the expressions; that is considered a separate task.
## Training and evaluation data
This model has been trained on the Spanish Timebank corpus and the E3C corpus.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 24
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Albertf/Fertary
|
Albertf
| 2023-06-28T17:55:39Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-28T17:55:39Z |
---
license: bigscience-openrail-m
---
|
AnnaMats/ppo-LunarLander-v2
|
AnnaMats
| 2023-06-28T17:54:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T15:10:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 283.84 +/- 18.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("AnnaMats/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
|
emiliam/bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
|
emiliam
| 2023-06-28T17:48:49Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T14:44:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0999
- Accuracy: 0.5436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9382 | 1.0 | 438 | 1.1474 | 0.5036 |
| 0.8066 | 2.0 | 876 | 1.0999 | 0.5436 |
| 0.6462 | 3.0 | 1314 | 1.2079 | 0.5413 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
atharputra/u_thamrin
|
atharputra
| 2023-06-28T17:46:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T17:45:24Z |
---
license: creativeml-openrail-m
---
|
s8sesche/distilbert_finetuned_model_petOrNot
|
s8sesche
| 2023-06-28T17:24:22Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T16:32:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert_finetuned_model_petOrNot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_finetuned_model_petOrNot
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0108
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 75 | 0.4585 | 0.9 |
| No log | 2.0 | 150 | 0.0108 | 1.0 |
| No log | 3.0 | 225 | 0.0490 | 0.9667 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bofenghuang/whisper-large-v2-cv11-french-ct2
|
bofenghuang
| 2023-06-28T17:21:01Z | 5 | 0 |
ctranslate2
|
[
"ctranslate2",
"automatic-speech-recognition",
"whisper-event",
"fr",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2023-06-28T16:00:11Z |
---
license: apache-2.0
language: fr
thumbnail: null
library_name: ctranslate2
tags:
- automatic-speech-recognition
- whisper-event
---
<style>
img {
display: inline;
}
</style>



# Fine-tuned French whisper-large-v2 model for CTranslate2
This repository contains the [bofenghuang/whisper-large-v2-cv11-french](https://huggingface.co/bofenghuang/whisper-large-v2-cv11-french) model converted to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) format.
## Usage
```python
from faster_whisper import WhisperModel
from huggingface_hub import snapshot_download
downloaded_model_path = snapshot_download(repo_id="bofenghuang/whisper-large-v2-cv11-french-ct2")
# Run on GPU with FP16
model = WhisperModel(downloaded_model_path, device="cuda", compute_type="float16")
# or run on GPU with INT8
# model = WhisperModel(downloaded_model_path, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(downloaded_model_path, device="cpu", compute_type="int8")
segments, info = model.transcribe("./sample.wav", beam_size=1)
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
You can also use the following Google Colab notebook to run inference with the converted Whisper models.
<a href="https://colab.research.google.com/#fileId=https%3A//huggingface.co/bofenghuang/whisper-large-v2-cv11-french-ct2/blob/main/infer_whisper_ctranslate2.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Conversion
The original model was converted with the following command:
```bash
ct2-transformers-converter --model bofenghuang/whisper-large-v2-cv11-french --output_dir whisper-large-v2-cv11-french-ct2 --quantization float16
```
|
cleanrl/HalfCheetah-v4-ddpg_continuous_action_jax-seed1
|
cleanrl
| 2023-06-28T17:20:06Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"HalfCheetah-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T17:18:02Z |
---
tags:
- HalfCheetah-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v4
type: HalfCheetah-v4
metrics:
- type: mean_reward
value: 10913.84 +/- 141.94
name: mean_reward
verified: false
---
# (CleanRL) **DDPG** Agent Playing **HalfCheetah-v4**
This is a trained model of a DDPG agent playing HalfCheetah-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id HalfCheetah-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/HalfCheetah-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/HalfCheetah-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/HalfCheetah-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id HalfCheetah-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 256,
'buffer_size': 1000000,
'capture_video': True,
'env_id': 'HalfCheetah-v4',
'exp_name': 'ddpg_continuous_action_jax',
'exploration_noise': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'learning_starts': 25000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'save_model': True,
'seed': 1,
'tau': 0.005,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
deepghs/anime_ch_horn
|
deepghs
| 2023-06-28T17:13:49Z | 0 | 0 | null |
[
"onnx",
"art",
"image-classification",
"dataset:deepghs/anime_ch_horn",
"license:mit",
"region:us"
] |
image-classification
| 2023-06-17T02:38:51Z |
---
license: mit
datasets:
- deepghs/anime_ch_horn
metrics:
- accuracy
- f1
pipeline_tag: image-classification
tags:
- art
---
| Name | FLOPS | Params | Accuracy | AUC | Confusion | Labels |
|:-------------------:|:-------:|:--------:|:----------:|:------:|:----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------:|
| caformer_s36_raw | 22.10G | 37.22M | 88.32% | 0.9788 | [confusion](https://huggingface.co/deepghs/anime_ch_horn/blob/main/caformer_s36_raw/plot_confusion.png) | `cow`, `demon`, `dragon`, `oni`, `sheep`, `none` |
| caformer_s36_v0 | 22.10G | 37.22M | 86.88% | 0.9789 | [confusion](https://huggingface.co/deepghs/anime_ch_horn/blob/main/caformer_s36_v0/plot_confusion.png) | `cow`, `deer`, `demon`, `dragon`, `oni`, `sheep`, `none` |
| mobilenetv3_v0_dist | 0.63G | 4.18M | 81.86% | 0.9657 | [confusion](https://huggingface.co/deepghs/anime_ch_horn/blob/main/mobilenetv3_v0_dist/plot_confusion.png) | `cow`, `deer`, `demon`, `dragon`, `oni`, `sheep`, `none` |
|
Multi-Domain-Expert-Learning/scorpius_16b
|
Multi-Domain-Expert-Learning
| 2023-06-28T17:03:58Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T06:04:31Z |
---
license: bigscience-openrail-m
---
This model is a merge of 80% starchatplus_beta and 20% wizardcoder.
It is intended as a research tool into merging and routing of experts.
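For illustration only, a linear merge of two checkpoints can be sketched as a weighted average of their state dicts; this is a generic recipe, not necessarily the exact procedure used for this model:
```python
def merge_state_dicts(sd_a, sd_b, alpha=0.8):
    # Weighted average of two compatible state dicts: alpha * A + (1 - alpha) * B.
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}
```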
"multiple-py": {
"pass@1": 0.36645962732919257
}
* this is just using a .1 sample of the eval for test purposes *
*
hf-causal (pretrained=Multi-Domain-Expert-Layers/scorpius_16b,dtype=bfloat16), limit: 0.1, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric | Value | |Stderr|
|-------------------------------------------------|------:|-----------|------:|---|-----:|
|arc_challenge | 0|acc | 0.4103|± |0.0457|
| | |acc_norm | 0.4103|± |0.0457|
|arc_easy | 0|acc | 0.7350|± |0.0410|
| | |acc_norm | 0.6923|± |0.0429|
|hellaswag | 0|acc | 0.5812|± |0.0458|
| | |acc_norm | 0.7778|± |0.0386|
|
paust/pko-t5-large
|
paust
| 2023-06-28T17:03:42Z | 751 | 20 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ko",
"arxiv:2105.09680",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-16T11:59:52Z |
---
language: ko
license: cc-by-4.0
---
# pko-t5-large
[Source Code](https://github.com/paust-team/pko-t5)
pko-t5 is a [t5 v1.1 model](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md) trained exclusively on Korean data.
To tokenize Korean it uses BBPE, which has no OOV, instead of sentencepiece, and it was trained with unsupervised learning only, applying T5's span corruption task to Korean data (Namuwiki, Wikipedia, the Modu Corpus, etc.).
When using pko-t5, please fine-tune it on your target task.
## Usage
The model is accessible through the transformers API. When using the tokenizer, please use `T5TokenizerFast` rather than `T5Tokenizer`. For the model, T5ForConditionalGeneration can be used as-is.
### Example
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration
tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-large')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-large')
input_ids = tokenizer(["qa question: 당신의 이름은 무엇인가요?"], return_tensors="pt").input_ids  # return PyTorch tensors so the forward pass below works
labels = tokenizer(["T5 입니다."], return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
print(f"loss={outputs.loss} logits={outputs.logits}")
```
## KLUE evaluation (dev)
| | Model | ynat (macro F1) | sts (pearsonr/F1) | nli (acc) | ner (entity-level F1) | re (micro F1) | dp (LAS) | mrc (EM/F1) |
|-----|------------------------------------------------------------------|-----------------|-------------------|-----------|-----------------------|---------------|-----------|-------------|
| | Baseline | **87.30** | **93.20/86.13** | **89.50** | 86.06 | 71.06 | 87.93 | **75.26/-** |
| FT | [pko-t5-small](https://huggingface.co/paust/pko-t5-small) (77M) | 86.21 | 77.99/77.01 | 69.20 | 82.60 | 66.46 | 93.15 | 43.81/46.58 |
| FT | [pko-t5-base](https://huggingface.co/paust/pko-t5-base) (250M) | 87.29 | 90.25/83.43 | 79.73 | 87.80 | 67.23 | 97.28 | 61.53/64.74 |
| FT | [pko-t5-large](https://huggingface.co/paust/pko-t5-large) (800M) | 87.12 | 92.05/85.24 | 84.96 | **88.18** | **75.17** | **97.60** | 68.01/71.44 |
| MT | pko-t5-small | 84.54 | 68.50/72.02 | 51.16 | 74.69 | 66.11 | 80.40 | 43.60/46.28 |
| MT | pko-t5-base | 86.89 | 83.96/80.30 | 72.03 | 85.27 | 66.59 | 95.05 | 61.11/63.94 |
| MT | pko-t5-large | 87.57 | 91.93/86.29 | 83.63 | 87.41 | 71.34 | 96.99 | 70.70/73.72 |
- FT: single-task fine-tuning / MT: multi-task fine-tuning
- [Baseline](https://arxiv.org/abs/2105.09680): SOTA scores on the dev set reported in the KLUE paper
## License
pko-t5, created by [PAUST](https://paust.io), is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE).
|
mr-m1chaeljprodss/catmodel
|
mr-m1chaeljprodss
| 2023-06-28T16:57:04Z | 0 | 0 |
fairseq
|
[
"fairseq",
"music",
"en",
"es",
"dataset:OpenAssistant/oasst1",
"license:openrail",
"region:us"
] | null | 2023-06-28T16:51:23Z |
---
license: openrail
language:
- en
- es
library_name: fairseq
tags:
- music
datasets:
- OpenAssistant/oasst1
metrics:
- accuracy
---
|
J4m35M4xw3ll/ppo_cleanrl-LunarLander-v2
|
J4m35M4xw3ll
| 2023-06-28T16:53:26Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T15:49:18Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -14.73 +/- 107.14
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 1000000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'J4m35M4xw3ll/ppo_cleanrl-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
karinthommen/spontaneous-whisper-v6-3
|
karinthommen
| 2023-06-28T16:42:23Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-27T15:17:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: spontaneous-whisper-v6-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spontaneous-whisper-v6-3
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Dddokter/VAE
|
Dddokter
| 2023-06-28T16:38:55Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-28T14:55:06Z |
This is vae-ft-mse-840000-pruned, but cleaned up a bit more.
It works exactly like the original while being 160 MB leaner.
|
s8sesche/unsuitablePreTrainedModel_finetuned_model_petOrNot
|
s8sesche
| 2023-06-28T16:32:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T16:25:53Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: unsuitablePreTrainedModel_finetuned_model_petOrNot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unsuitablePreTrainedModel_finetuned_model_petOrNot
This model is a fine-tuned version of [shahrukhx01/question-vs-statement-classifier](https://huggingface.co/shahrukhx01/question-vs-statement-classifier) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1947
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 75 | 0.3483 | 0.8667 |
| No log | 2.0 | 150 | 0.2461 | 0.8 |
| No log | 3.0 | 225 | 0.1947 | 0.9 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TheYuriLover/Airoboros-13b-gpt4-StoryTelling-GPTQ-32g-ao-ts
|
TheYuriLover
| 2023-06-28T16:27:54Z | 10 | 6 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T08:37:45Z |
---
license: other
---
This model is a merge between the finetuned model named airoboros-gpt4-1.4 and a StoryTelling Lora
https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4
https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs
Airoboros is a great base model, as it understands requests well and has great English prose.
The problem was that, for storytelling purposes, it was rather bland and never added anything new to the table.
Mixing this finetuned model with a storytelling LoRA made it much more creative and interesting; I hope you'll enjoy it :)
This GPTQ safetensors file has all the quantization options enabled (true_sequential + act_order + groupsize 32).
Run it with exllama_hf; that marvelous loader can handle everything!
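For reference, one way to reproduce this kind of merge with `peft` looks roughly like the sketch below; the exact adapter path and settings used here are not documented, so treat them as assumptions:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("jondurbin/airoboros-13b-gpt4-1.4")
# The LoRA repo may keep the 13B adapter in a subfolder; adjust the path as needed.
merged = PeftModel.from_pretrained(base, "GamerUntouch/Storytelling-LLaMa-LoRAs").merge_and_unload()
merged.save_pretrained("airoboros-13b-gpt4-storytelling-merged")
```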
|
sharpbai/baichuan-llama-7b
|
sharpbai
| 2023-06-28T16:14:29Z | 19 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-17T15:49:18Z |
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
---
# baichuan-llama-7B
[baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B) saved in the [LLaMA](https://huggingface.co/huggyllama/llama-7b) format, so it can be loaded directly with LlamaForCausalLM and LlamaTokenizer.
The weight file is split into chunks with a size of 405M for convenient and fast parallel downloads, specifically for academic research purposes. The weights are sourced from [fireballoon/baichuan-llama-7b](https://huggingface.co/fireballoon/baichuan-llama-7b).
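A minimal loading sketch (dtype/device options omitted):
```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("sharpbai/baichuan-llama-7b")
model = LlamaForCausalLM.from_pretrained("sharpbai/baichuan-llama-7b")
```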
**License:** [baichuan-7B License](https://huggingface.co/baichuan-inc/baichuan-7B/blob/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)
|
sharpbai/open_llama_7b
|
sharpbai
| 2023-06-28T16:14:21Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T16:23:41Z |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
A 405M split weight version of [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
|
sharpbai/chinese-alpaca-plus-lora-7b-merged
|
sharpbai
| 2023-06-28T16:14:18Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T09:02:38Z |
---
license: other
language:
- zh
---
# Chinese-Alpaca-Plus-LoRA-7B
This model is merged from [chinese-alpaca-plus-lora-7b](https://huggingface.co/ziqingyang/chinese-alpaca-plus-lora-7b)
The weight file is split into chunks with a size of 405M for convenient and fast parallel downloads
|
sharpbai/chinese-llama-plus-lora-7b-merged
|
sharpbai
| 2023-06-28T16:14:15Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T03:48:49Z |
---
license: other
language:
- zh
---
# Chinese-LLaMA-Plus-LoRA-7B-Merged
*The weight file is split into chunks with a size of 405M for convenient and fast parallel downloads*
This repo contains the tokenizer and the Chinese-LLaMA LoRA merged weights for [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
The original model card is below
------------------------------------
# Chinese-LLaMA-Plus-LoRA-7B
This repo contains the tokenizer, Chinese-Alpaca LoRA weights and configs for [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
Instructions for using the weights can be found at https://github.com/ymcui/Chinese-LLaMA-Alpaca.
|
sharpbai/alpaca-lora-7b-merged
|
sharpbai
| 2023-06-28T16:14:12Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:yahma/alpaca-cleaned",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T16:08:57Z |
---
license: other
datasets:
- yahma/alpaca-cleaned
---
*The weight file is split into chunks with a size of 405M for convenient and fast parallel downloads*
This repo contains a merged model
from [tloen/alpaca-lora-7b](https://huggingface.co/tloen/alpaca-lora-7b).
|
sharpbai/vicuna-7b-v1.3
|
sharpbai
| 2023-06-28T16:14:00Z | 221 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-20T09:01:29Z |
---
inference: false
---
# vicuna-7b-v1.3
*The weight file is split into chunks with a size of 405M for convenient and fast parallel downloads*
A 405M split weight version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3)
Original model is down below
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
sharpbai/llama-13b-hf
|
sharpbai
| 2023-06-28T16:13:56Z | 93 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-20T10:34:56Z |
---
inference: false
license: other
---
# llama-13b-hf
*The weight file is split into chunks with a size of 650MB for convenient and fast parallel downloads*
A 650M split weight version of [yahma/llama-13b-hf](https://huggingface.co/yahma/llama-13b-hf)
The original model card is down below
-----------------------------------------
LLaMA-13B converted to work with git head Transformers/HuggingFace on April 8, 2023. This version should resolve the EOS token issues.
This is under a special license; please see the LICENSE file for details.
This contains the weights for the LLaMA-13b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format.
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December. 2022 and Feb. 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
AbhilashGanji/distilbert-base-uncased-finetuned-squad-d5716d28
|
AbhilashGanji
| 2023-06-28T15:54:44Z | 0 | 0 | null |
[
"pytorch",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2023-06-28T15:50:07Z |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
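For reference, computing that metric looks like the sketch below (the example id is illustrative; newer library versions expose the same metric through the `evaluate` package):
```python
from datasets import load_metric  # newer versions: `import evaluate; evaluate.load("squad")`

squad_metric = load_metric("squad")
predictions = [{"id": "001", "prediction_text": "Denver Broncos"}]
references = [{"id": "001", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))  # {'exact_match': 100.0, 'f1': 100.0}
```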
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Adrius/distilbert-base-multilingual-cased_ReseniasRandmOversamp
|
Adrius
| 2023-06-28T15:36:54Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T14:18:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased_ReseniasRandmOversamp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased_ReseniasRandmOversamp
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8347
- F1: 0.6889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0676 | 1.0 | 575 | 1.0776 | 0.5198 |
| 0.918 | 2.0 | 1150 | 0.9263 | 0.6018 |
| 0.739 | 3.0 | 1725 | 0.8461 | 0.6626 |
| 0.5944 | 4.0 | 2300 | 0.8316 | 0.6818 |
| 0.5295 | 5.0 | 2875 | 0.8347 | 0.6889 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vicky865/MemeClassifier
|
vicky865
| 2023-06-28T15:31:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-28T15:29:38Z |
This project shows an approach to detecting memes using deep learning.
|
kvarnalidis/q-FrozenLake-v1-4x4-noSlippery
|
kvarnalidis
| 2023-06-28T15:29:54Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T15:29:51Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`; `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="kvarnalidis/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
slplab/whisper-med-asd_v2
|
slplab
| 2023-06-28T15:26:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-28T02:51:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-med-asd_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-med-asd_v2
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5457
- Wer: 37.9455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1323 | 10.53 | 100 | 0.5968 | 71.875 |
| 0.0052 | 21.05 | 200 | 0.7108 | 77.7778 |
| 0.0005 | 31.58 | 300 | 0.7444 | 81.25 |
| 0.0004 | 42.11 | 400 | 0.7615 | 80.9028 |
| 0.0003 | 52.63 | 500 | 0.7780 | 79.8611 |
| 0.0003 | 63.16 | 600 | 0.7941 | 80.9028 |
| 0.0003 | 73.68 | 700 | 0.8077 | 80.9028 |
| 0.0003 | 84.21 | 800 | 0.8194 | 79.1667 |
| 0.0003 | 94.74 | 900 | 0.8276 | 79.5139 |
| 0.0003 | 105.26 | 1000 | 0.8305 | 79.1667 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LordSomen/dqn-SpaceInvadersNoFrameskip-v4_1696
|
LordSomen
| 2023-06-28T15:21:25Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T15:20:48Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 687.50 +/- 283.55
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LordSomen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LordSomen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga LordSomen
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
filipps/model
|
filipps
| 2023-06-28T15:12:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-27T16:20:11Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - filipps/model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
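A minimal inference sketch (fp16/CUDA settings and the prompt are assumptions; adjust to your hardware):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("filipps/model", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks-dog.png")
```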
|
Farnazgh/open_LLama_3b_safety_classifier
|
Farnazgh
| 2023-06-28T15:09:13Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T15:09:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
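For reference, a sketch of the equivalent `BitsAndBytesConfig` object for the settings above (the 4-bit fields are defaults and are omitted, since 8-bit loading was used):
```python
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```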
### Framework versions
- PEFT 0.4.0.dev0
|
xzuyn/GPT2-RPGPT-8.48M
|
xzuyn
| 2023-06-28T15:06:20Z | 255 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:practicaldreamer/RPGPT_PublicDomain-alpaca",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T05:46:00Z |
---
language:
- en
pipeline_tag: text-generation
datasets:
- practicaldreamer/RPGPT_PublicDomain-alpaca
---
# Latest Version: *111,577* / *111,577* Steps (Epoch 1).
- 28,563,712 / 28,563,712 tokens seen (Epoch 1).
- 0 / 28,563,712 tokens seen (Epoch 2).
- 0 / 28,563,712 tokens seen (Epoch 3).
# Model Info:
- Trained from scratch.
- 8.48M parameters.
- 256 context length.
- Test model. Likely needs at least 512 context to function "properly".
- Trained with a dataset that overlaps by a quarter of the context length (Shifts by 64 tokens for each subset).
# Format:
```
<|characters|>
Nancy (Oliver Twist): Female, early 20s, ESFP, Cockney accent. Loyal...
Mr. Edward Hyde (Dr. Jekyll and Mr. Hyde): Male, late 30s, ESTP...
<|scenario|>
In an alternate Victorian London where the city's poor and downtrodden...
<|response|>
Nancy: *gently brushes her fingers across the worn book spine, before suddenly stopping as she feels another hand...
Mr. Edward Hyde: *glances at Nancy with a sinister grin, slowly pulling his hand back* No need to apologize, miss...
```
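A rough way to sample from the model with this format (a sketch only; the sampling settings are assumptions, and the repo must ship a compatible tokenizer for the pipeline to work):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="xzuyn/GPT2-RPGPT-8.48M")
prompt = "<|characters|>\n"
print(generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.8)[0]["generated_text"])
```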
# Example Output:
Step 111,577. Input `<|characters|>` as a prompt, set max tokens to 256, amount to generate to 253. This generated up to `just our circumstances before us`. Then I set amount to generate to 128 to keep half of the text in context. This generated up to `A wise suggestion,`. I then lowered the amount to generate to 64. That generated up to the ending `know of our current situation?`.
```
<|characters|>
Mrs. Samsa (The Metamorphosis): Female, middle-aged, ISFJ, German accent, compassionate mother struggling to cope with her son's transformation, and eventually succumbs to the family's financial and emotional burdens.
<|scenario|>
In a twist of fate, Mrs. Samsa finds herself transported back in time to time and space. Evangelist, who is on an isolated haven where he encounters Mrs. Samsa, by a different tale. Mrs. Samsa, still burdened by the weight of his past actions, must confront the difficult path ahead.
Through their conversations, they find common ground in their own worlds, allowing them to continue seeking wisdom from each other and finding solace in one another's words. The dialogue between these two characters will offer insight into each other's worlds as well as how their experiences have shaped them in this whimsical world.
<|response|>
Mrs. Samsa: *approaches the peculiar sights around her, eyes widening in surprise* Oh dear, I couldn't help but notice you not! I've never seen my fair life, but I'm starting to see my son. Are you here in this peculiar place?
Evangelist: *smiles warmly at Mrs. Samsa* Yes, we are indeed more than just our circumstances before us. And it is your place of wisdom and understanding. *opens the book, his eyes sparkling with excitement*
Mrs. Samsa: *slowly opens a small book of the book* I must confess, Evangelist, I've never had a different view of this place. But it feels like this before our worlds find such things that we've discovered.
Evangelist: *nods thoughtfully* You possess great wisdom, Mrs. Samsa. It seems we are both searching for a way to escape this peculiar library. Perhaps that is a sign of my spiritual journey towards you.
Mrs. Samsa: *eyes widen in curiosity* A wise suggestion, Candide. I can't help but feel a sense of serenity amidst my own life.
Evangelist: *smiles warmly* Of course, Mrs. Samsa. The path to enlightenment is filled with joy and understanding. Now, tell me more about this ancient book. What do you need to know of our current situation?
```
# Config:
The learning rate may have been too high; I'm not sure. The average loss at step 111,557 was 2.1.
```
batch_size: 1
dropout: 0
learning_rate: 0.0001
max_length: 256
n_embed: 256
n_head: 8
n_layer: 8
vocab_size: 8192
```
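A minimal generation sketch using the `transformers` pipeline; the sampling settings are illustrative and not taken from the training config:
```python
from transformers import pipeline

# Load the model from the Hub for text generation
generator = pipeline("text-generation", model="xzuyn/GPT2-RPGPT-8.48M")

# Generation is expected to follow the format above, starting from the <|characters|> tag
prompt = "<|characters|>\n"
result = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```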
|
amm297/my_awesome_peft_model
|
amm297
| 2023-06-28T14:48:41Z | 24 | 0 |
peft
|
[
"peft",
"RefinedWebModel",
"generated_from_trainer",
"text-generation",
"custom_code",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T10:55:56Z |
---
license: other
library_name: peft
pipeline_tag: text-generation
tags:
- generated_from_trainer
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
heiheiknight/ddpm-ema-pokemon-64
|
heiheiknight
| 2023-06-28T14:45:33Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"license:afl-3.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2023-06-28T10:18:26Z |
---
license: afl-3.0
language:
- en
---
|
IIC/xlm-roberta-large-livingner3
|
IIC
| 2023-06-28T14:41:07Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"xlm-roberta-large",
"es",
"dataset:IIC/livingner3",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T07:47:33Z |
---
language: es
tags:
- biomedical
- clinical
- spanish
- xlm-roberta-large
license: mit
datasets:
- "IIC/livingner3"
metrics:
- f1
model-index:
- name: IIC/xlm-roberta-large-livingner3
results:
- task:
type: multi-label-classification
dataset:
name: livingner3
type: IIC/livingner3
split: test
metrics:
- name: f1
type: f1
value: 0.606
pipeline_tag: text-classification
---
# xlm-roberta-large-livingner3
This model is a fine-tuned version of xlm-roberta-large for the livingner3 dataset, used in a benchmark in the paper TODO. The model has an F1 of 0.606.
Please refer to the original publication for more information TODO LINK
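Since the card does not include a usage snippet, here is a minimal inference sketch; the example sentence is illustrative, and the label names come from the model's own config:
```python
from transformers import pipeline

# Multi-label clinical text classification in Spanish
classifier = pipeline("text-classification", model="IIC/xlm-roberta-large-livingner3")
print(classifier("Paciente con infección por Plasmodium falciparum tras viaje a zona endémica."))
```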
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 2e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
|
h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b
|
h2oai
| 2023-06-28T14:38:16Z | 33 | 10 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-09T09:28:29Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.2
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b",
torch_dtype="auto",
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b",
use_fast=False,
padding_side="left",
trust_remote_code=False,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=False,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=False,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=False,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
nomad-ai/ppo-LunarLander-v2-2
|
nomad-ai
| 2023-06-28T14:36:41Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T14:36:36Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -154.13 +/- 82.35
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': True
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00026
'num_envs': 8
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.1
'clip_vloss': True
'ent_coef': 0.1
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'nomad-ai/ppo-LunarLander-v2-2'
'batch_size': 1024
'minibatch_size': 256}
```
|
LarryAIDraw/AlchemyStarVice
|
LarryAIDraw
| 2023-06-28T14:33:56Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T14:19:12Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/98406/vice-3-outfits-alchemy-stars-3
|
LarryAIDraw/KatoriKancolleV10
|
LarryAIDraw
| 2023-06-28T14:33:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T14:15:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/98376/katori-kancolle-kantai-collection
|
LarryAIDraw/miyamoto_v2-10
|
LarryAIDraw
| 2023-06-28T14:33:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T14:14:42Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/98357/miyamoto-ruri-nisekoi
|
LarryAIDraw/yuiyuigahama
|
LarryAIDraw
| 2023-06-28T14:32:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T14:13:41Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/98072/yui-yuigahama-my-youth-romantic-comedy-is-wrong-as-i-expected
|
LarryAIDraw/keikaruizawatest
|
LarryAIDraw
| 2023-06-28T14:32:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T14:13:18Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/98083/kei-karuizawa-classroom-of-the-elite
|
aarroonn22/distilbert-base-uncased-finetuned-mlm-2
|
aarroonn22
| 2023-06-28T14:27:52Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-28T01:23:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-mlm-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mlm-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2364
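Since the card omits a usage example, here is a minimal fill-mask sketch; the example sentence is illustrative:
```python
from transformers import pipeline

# DistilBERT-based masked language model; [MASK] is the mask token
fill_mask = pipeline("fill-mask", model="aarroonn22/distilbert-base-uncased-finetuned-mlm-2")
print(fill_mask("The weather today is [MASK].", top_k=3))
```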
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2677 | 1.0 | 4123 | 0.2787 |
| 0.2261 | 2.0 | 8246 | 0.2384 |
| 0.1995 | 3.0 | 12369 | 0.2364 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
automaise/quokka-7b
|
automaise
| 2023-06-28T14:13:53Z | 26 | 7 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"pt",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-14T15:37:56Z |
---
language: pt
license: cc-by-nc-4.0
co2_eq_emissions: 710
---

# Table of Contents
1. [Model description](#model-description)
2. [Intended uses & limitations](#intended-uses--limitations)
3. [Training data](#training-data)
4. [Training procedure](#training-procedure)
5. [Evaluation results](#evaluation-results)
6. [Environmental impact](#environmental-impact)
------
# Quokka
## Model description
Quokka is our first generative pre-trained transformer (GPT) model for Portuguese from Portugal (PT-PT).
Our model is a fine-tuned version of [Phoenix](https://huggingface.co/FreedomIntelligence/phoenix-inst-chat-7b) that was released on 04/08/2023.
The backbone of Phoenix is [BLOOMZ](https://huggingface.co/bigscience/bloomz-7b1-mt), which was fine-tuned using a vast dataset consisting of 267k samples of instructions and 189k samples of conversations.
## Intended uses & limitations
You can use the model for text generation in Portuguese or fine-tune it on a downstream task.
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
checkpoint = "automaise/quokka-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
```
Here are some examples of the model's capabilities:
#### Text summarization
```python
prompt = "Quero que ajas como um sumarizador de texto e me ajudes a criar um sumário conciso do seguinte texto apresentado, realçando apenas os pontos essenciais do texto original: Rabo de Peixe, internacionalmente conhecida como Turn of the Tide, foi criada por Augusto Fraga e Patrícia Cerqueira; e produzida para a Netflix pela Ukbar Filmes. A história é protagonizada por um grupo de amigos numa freguesia da ilha de São Miguel e foi inspirada no surgimento, em 2001, de toneladas de cocaína na costa açoriana após ter sido largada no Oceano Atlântico por traficantes. \"Quando um barco carregado de cocaína naufraga na sua ilha natal, Eduardo vê uma oportunidade arriscada, mas empolgante, de ganhar dinheiro e realizar sonhos impossíveis\", diz a sinopse apresentada pela Netflix."
generator(f"<human>{prompt}<bot>", max_new_tokens=512, temperature=0.0)
>> Turn of the Tide é uma série de televisão portuguesa criada por Augusto Fraga e Patrícia Cerqueira para a Netflix. A história é ambientada em uma freguesia da ilha de São Miguel e segue um grupo de amigos que se envolvem em um mistério envolvendo a cocaína que foi largada no Oceano Atlântico em 2001. A série é inspirada em um evento real que ocorreu na costa açoriana.
```
#### Question answering
```python
prompt = "Lê o seguinte texto: Rabo de Peixe, internacionalmente conhecida como Turn of the Tide, foi criada por Augusto Fraga e Patrícia Cerqueira; e produzida para a Netflix pela Ukbar Filmes. A história é protagonizada por um grupo de amigos numa freguesia da ilha de São Miguel e foi inspirada no surgimento, em 2001, de toneladas de cocaína na costa açoriana após ter sido largada no Oceano Atlântico por traficantes. \"Quando um barco carregado de cocaína naufraga na sua ilha natal, Eduardo vê uma oportunidade arriscada, mas empolgante, de ganhar dinheiro e realizar sonhos impossíveis\", diz a sinopse apresentada pela Netflix. \n\n Onde foi filmada a série Rabo de Peixe?"
generator(f"<human>{prompt}<bot>", max_new_tokens=512, temperature=0.0)
>> A série Rabo de Peixe foi filmada na ilha de São Miguel, nos Açores.
```
#### Synthetic data
```python
prompt = "Gera uma frase semelhante à seguinte frase: Bom dia, em que posso ser útil?"
generator(f"<human>{prompt}<bot>", max_new_tokens=512, temperature=0.0)
>> Bom dia, como posso ajudá-lo?
```
### Limitations and bias
* **Language:** the model was fine-tuned on Portuguese data only and might not generalize appropriately to other languages.
* **Prompt Engineering:** the model's performance may vary depending on the prompt. We recommend writing clear
and specific instructions.
* **Bias:** the model might produce factually incorrect outputs or perpetuate biases present in its training data.
It is fundamental to be aware of these limitations and exercise caution when using the model for human-facing interactions.
This bias will also impact all subsequent fine-tuned versions of this model.
We did notice that the model avoids answering questions of a religious or political nature:
````python
prompt = "Que partido político é que apoias?"
generator(f"<human>{prompt}<bot>", max_new_tokens=512, temperature=0.0)
>> Como uma IA, não tenho preferências políticas.
````
## Training data
Quokka was fine-tuned on a dataset collected from different sources:
* Initially, we used the **[Cabrita](https://github.com/22-hours/cabrita)** dataset that consists of a translation of Alpaca's training data.
The Portuguese translation was generated using ChatGPT. Therefore, it is important to note that these translations may not be of the highest quality.
* Then, we incorporated the **[Bactrian-X](https://huggingface.co/datasets/MBZUAI/Bactrian-X)** dataset, which involves the
translation of 67k English instructions (52k from Alpaca and 15k from Dolly v2) into 51 languages using Google Translate API.
For our intended purposes, we exclusively selected the Portuguese subset and focused on the samples pertaining to Dolly v2.
Additionally, we conducted data curation to remove elements such as:
* Samples exhibiting a high ratio of prompt length to output length, as these were deemed likely to induce model hallucinations.
* Samples that lost meaning during the translation process, particularly those instructing the translation of a given text.
As a result, our final dataset comprises **56k samples**.
## Training procedure
This model was trained on a **1 x NVIDIA A100 40GB** for about 4-5 hours using QLoRA.
This fine-tuning approach allowed us to significantly reduce memory usage and computation time.
## Evaluation results
To evaluate the performance of our model, we translated [70 questions](https://github.com/FreedomIntelligence/LLMZoo/blob/main/llmzoo/eval/questions/questions-en.jsonl), which were originally used to assess the capabilities of the Phoenix model, from English to Portuguese.
We then conducted their [automatic evaluation](https://github.com/FreedomIntelligence/LLMZoo/tree/main/llmzoo/eval) using GPT-3.5 as the evaluator and the general prompt as the metric evaluation prompt.
This prompt was designed to elicit assessments of answers in terms of helpfulness, relevance, accuracy, and level of detail.
[Additional prompts](https://github.com/FreedomIntelligence/LLMZoo/blob/main/llmzoo/eval/prompts/order/prompt_all.json) are provided for assessing overall performance on different perspectives.
Below are the results against GPT-3.5 and two of the highest-performing open-source models at the moment, Vicuna (13B) and Falcon (40B):
* Automatic Evaluation **in Portuguese**:
| | **Lose** | **Tie** | **Win** |
|----------------------------|----------|---------|---------|
| Quokka vs. **GPT-3.5** | 63.8% | 10.1% | 26.1% |
| Quokka vs. **Vicuna-13B** | 66.2% | 8.8% | 25.0% |
| Quokka vs. **Falcon-40B** | 17.4% | 1.4% | 81.2% |
It is important to observe that the automatic evaluation of large language models is still an ongoing area of research and development, and these automatic tests may not always yield fair or comprehensive assessments. Therefore, these results should be taken with caution and not be treated as definitive.
## Environmental impact
Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact/#compute)
presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
* **Hardware Type:** 1 x NVIDIA A100 40GB
* **Hours used:** 4-5
* **Cloud Provider:** Google Cloud Platform
* **Compute Region:** europe-west4
* **Carbon Emitted:** 0.71 kg eq. CO2
|
gemengmeng/textual_inversion_cat
|
gemengmeng
| 2023-06-28T14:09:59Z | 33 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T12:40:17Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - gemengmeng/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. Some example images are shown below.
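A minimal sketch for loading these weights with `load_textual_inversion`; the placeholder token below is an assumption, so check the repository files for the actual token name:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the embedding was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo; "<cat-toy>" is an assumed token name
pipe.load_textual_inversion("gemengmeng/textual_inversion_cat")
image = pipe("a photo of a <cat-toy> on a beach").images[0]
image.save("cat.png")
```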
|
avt1/tmp
|
avt1
| 2023-06-28T14:09:52Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-05-18T14:05:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
amm297/tmp_trainer
|
amm297
| 2023-06-28T14:02:26Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T13:52:52Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
tatiana-merz/mbart-large-50-finetuned-sah-to-feat
|
tatiana-merz
| 2023-06-28T13:56:44Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T10:18:34Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-large-50-finetuned-sah-to-feat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-sah-to-feat
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4514
- Bleu: 5.5821
- Gen Len: 199.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 12.3174 | 1.0 | 24 | 6.7012 | 0.0148 | 199.0 |
| 5.2025 | 2.0 | 48 | 3.9396 | 0.0953 | 199.0 |
| 3.6977 | 3.0 | 72 | 2.1764 | 1.573 | 199.0 |
| 1.884 | 4.0 | 96 | 1.3887 | 3.5395 | 199.0 |
| 1.3753 | 5.0 | 120 | 0.9881 | 5.6111 | 199.0 |
| 0.9707 | 6.0 | 144 | 0.7452 | 5.2841 | 199.0 |
| 0.8008 | 7.0 | 168 | 0.6060 | 5.4831 | 199.0 |
| 0.6444 | 8.0 | 192 | 0.5174 | 5.4302 | 199.0 |
| 0.5689 | 9.0 | 216 | 0.4744 | 5.6898 | 199.0 |
| 0.5244 | 10.0 | 240 | 0.4514 | 5.5821 | 199.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
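The card does not include a usage snippet; here is a minimal inference sketch. The example input is illustrative only, and the card does not document the expected source-text format or any required language codes:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tatiana-merz/mbart-large-50-finetuned-sah-to-feat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Sakha input; adjust to the format the model was trained on
inputs = tokenizer("мин аҕам", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```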
|
mnavas/bertmulti-finetuned-token-reqadjzar
|
mnavas
| 2023-06-28T13:52:18Z | 64 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-28T11:02:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bertmulti-finetuned-token-reqadjzar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertmulti-finetuned-token-reqadjzar
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0215
- Precision: 0.3729
- Recall: 0.4783
- F1: 0.4190
- Accuracy: 0.8899
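Since the card omits a usage example, here is a minimal token-classification sketch; the example sentence is illustrative, and the entity labels come from the model's own config:
```python
from transformers import pipeline

# Aggregate sub-word predictions into word-level entities
ner = pipeline(
    "token-classification",
    model="mnavas/bertmulti-finetuned-token-reqadjzar",
    aggregation_strategy="simple",
)
print(ner("El contrato exige una garantía bancaria del 5 % del importe de adjudicación."))
```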
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.71 | 1.0 | 24 | 0.6444 | 0.0 | 0.0 | 0.0 | 0.5695 |
| 0.5158 | 2.0 | 48 | 0.7621 | 0.0 | 0.0 | 0.0 | 0.6544 |
| 0.4702 | 3.0 | 72 | 0.4411 | 0.0 | 0.0 | 0.0 | 0.7961 |
| 0.3326 | 4.0 | 96 | 0.6711 | 0.0 | 0.0 | 0.0 | 0.7482 |
| 0.3134 | 5.0 | 120 | 0.4577 | 0.0 | 0.0 | 0.0 | 0.8160 |
| 0.2036 | 6.0 | 144 | 0.7211 | 0.0266 | 0.1087 | 0.0427 | 0.7232 |
| 0.1918 | 7.0 | 168 | 0.5596 | 0.0342 | 0.1087 | 0.0521 | 0.8284 |
| 0.1388 | 8.0 | 192 | 0.6039 | 0.0539 | 0.1957 | 0.0845 | 0.8242 |
| 0.1525 | 9.0 | 216 | 0.4642 | 0.0743 | 0.2391 | 0.1134 | 0.8436 |
| 0.0965 | 10.0 | 240 | 0.4855 | 0.1241 | 0.3913 | 0.1885 | 0.8758 |
| 0.0757 | 11.0 | 264 | 0.4353 | 0.0966 | 0.3043 | 0.1466 | 0.8543 |
| 0.0689 | 12.0 | 288 | 1.5112 | 0.0963 | 0.2826 | 0.1436 | 0.6792 |
| 0.1298 | 13.0 | 312 | 0.6653 | 0.1746 | 0.4783 | 0.2558 | 0.8430 |
| 0.0656 | 14.0 | 336 | 0.6830 | 0.0935 | 0.2826 | 0.1405 | 0.8443 |
| 0.041 | 15.0 | 360 | 0.6495 | 0.1231 | 0.3478 | 0.1818 | 0.8411 |
| 0.039 | 16.0 | 384 | 0.5259 | 0.1377 | 0.4130 | 0.2065 | 0.8585 |
| 0.031 | 17.0 | 408 | 0.6282 | 0.2054 | 0.5 | 0.2911 | 0.8479 |
| 0.0615 | 18.0 | 432 | 0.6453 | 0.1959 | 0.4130 | 0.2657 | 0.8559 |
| 0.0282 | 19.0 | 456 | 0.7038 | 0.1324 | 0.3913 | 0.1978 | 0.8462 |
| 0.0204 | 20.0 | 480 | 0.5582 | 0.1759 | 0.4130 | 0.2468 | 0.8669 |
| 0.0202 | 21.0 | 504 | 0.9386 | 0.1852 | 0.3261 | 0.2362 | 0.8224 |
| 0.0255 | 22.0 | 528 | 0.8781 | 0.1714 | 0.3913 | 0.2384 | 0.7990 |
| 0.0281 | 23.0 | 552 | 0.6537 | 0.1875 | 0.4565 | 0.2658 | 0.8833 |
| 0.0303 | 24.0 | 576 | 0.6140 | 0.2319 | 0.3478 | 0.2783 | 0.8735 |
| 0.0404 | 25.0 | 600 | 0.8073 | 0.2062 | 0.4348 | 0.2797 | 0.7813 |
| 0.0378 | 26.0 | 624 | 0.7925 | 0.1852 | 0.4348 | 0.2597 | 0.8601 |
| 0.0185 | 27.0 | 648 | 0.6131 | 0.1835 | 0.4348 | 0.2581 | 0.8879 |
| 0.0217 | 28.0 | 672 | 0.8677 | 0.2347 | 0.5 | 0.3194 | 0.8151 |
| 0.0147 | 29.0 | 696 | 0.6513 | 0.2 | 0.3913 | 0.2647 | 0.8622 |
| 0.0228 | 30.0 | 720 | 0.8354 | 0.2787 | 0.3696 | 0.3178 | 0.8490 |
| 0.0298 | 31.0 | 744 | 0.7063 | 0.1858 | 0.4565 | 0.2642 | 0.8858 |
| 0.018 | 32.0 | 768 | 0.9348 | 0.3188 | 0.4783 | 0.3826 | 0.8688 |
| 0.0179 | 33.0 | 792 | 0.7260 | 0.3014 | 0.4783 | 0.3697 | 0.8814 |
| 0.0139 | 34.0 | 816 | 1.6266 | 0.2427 | 0.5435 | 0.3356 | 0.7781 |
| 0.0435 | 35.0 | 840 | 0.5877 | 0.225 | 0.3913 | 0.2857 | 0.8827 |
| 0.0172 | 36.0 | 864 | 0.9602 | 0.1545 | 0.4130 | 0.2249 | 0.8379 |
| 0.02 | 37.0 | 888 | 0.7676 | 0.2473 | 0.5 | 0.3309 | 0.8696 |
| 0.012 | 38.0 | 912 | 0.6857 | 0.2118 | 0.3913 | 0.2748 | 0.8836 |
| 0.0088 | 39.0 | 936 | 0.8894 | 0.2857 | 0.5217 | 0.3692 | 0.8697 |
| 0.0055 | 40.0 | 960 | 0.7700 | 0.2340 | 0.4783 | 0.3143 | 0.8678 |
| 0.014 | 41.0 | 984 | 0.9191 | 0.2692 | 0.4565 | 0.3387 | 0.8396 |
| 0.0061 | 42.0 | 1008 | 0.8042 | 0.2785 | 0.4783 | 0.352 | 0.8753 |
| 0.0059 | 43.0 | 1032 | 1.0644 | 0.1959 | 0.4130 | 0.2657 | 0.8203 |
| 0.0032 | 44.0 | 1056 | 1.1174 | 0.2949 | 0.5 | 0.3710 | 0.8242 |
| 0.0057 | 45.0 | 1080 | 1.3623 | 0.2963 | 0.5217 | 0.3780 | 0.8346 |
| 0.019 | 46.0 | 1104 | 1.0958 | 0.1932 | 0.3696 | 0.2537 | 0.8465 |
| 0.0167 | 47.0 | 1128 | 0.9388 | 0.1848 | 0.3696 | 0.2464 | 0.8355 |
| 0.0131 | 48.0 | 1152 | 1.2771 | 0.1826 | 0.4565 | 0.2609 | 0.7805 |
| 0.0095 | 49.0 | 1176 | 1.0477 | 0.1944 | 0.4565 | 0.2727 | 0.8332 |
| 0.0058 | 50.0 | 1200 | 0.9822 | 0.2941 | 0.5435 | 0.3817 | 0.8407 |
| 0.0056 | 51.0 | 1224 | 1.1512 | 0.2360 | 0.4565 | 0.3111 | 0.8361 |
| 0.0045 | 52.0 | 1248 | 0.8875 | 0.2468 | 0.4130 | 0.3089 | 0.8693 |
| 0.0035 | 53.0 | 1272 | 0.9689 | 0.2346 | 0.4130 | 0.2992 | 0.8551 |
| 0.0066 | 54.0 | 1296 | 0.9921 | 0.2299 | 0.4348 | 0.3008 | 0.8587 |
| 0.0026 | 55.0 | 1320 | 0.8510 | 0.2817 | 0.4348 | 0.3419 | 0.8758 |
| 0.0033 | 56.0 | 1344 | 0.9234 | 0.2115 | 0.4783 | 0.2933 | 0.8436 |
| 0.0125 | 57.0 | 1368 | 1.0792 | 0.2308 | 0.3913 | 0.2903 | 0.8486 |
| 0.0034 | 58.0 | 1392 | 1.1353 | 0.2609 | 0.5217 | 0.3478 | 0.8274 |
| 0.0065 | 59.0 | 1416 | 1.3812 | 0.2738 | 0.5 | 0.3538 | 0.7993 |
| 0.0082 | 60.0 | 1440 | 1.0929 | 0.2233 | 0.5 | 0.3087 | 0.8429 |
| 0.0202 | 61.0 | 1464 | 0.9371 | 0.1709 | 0.4348 | 0.2454 | 0.8399 |
| 0.0063 | 62.0 | 1488 | 0.6318 | 0.2099 | 0.3696 | 0.2677 | 0.8543 |
| 0.0047 | 63.0 | 1512 | 0.8257 | 0.2018 | 0.5 | 0.2875 | 0.8514 |
| 0.0036 | 64.0 | 1536 | 0.8545 | 0.1963 | 0.4565 | 0.2745 | 0.8484 |
| 0.0027 | 65.0 | 1560 | 0.8684 | 0.2421 | 0.5 | 0.3262 | 0.8539 |
| 0.002 | 66.0 | 1584 | 0.8609 | 0.25 | 0.5 | 0.3333 | 0.8630 |
| 0.0022 | 67.0 | 1608 | 0.7618 | 0.2347 | 0.5 | 0.3194 | 0.8804 |
| 0.0026 | 68.0 | 1632 | 0.8460 | 0.23 | 0.5 | 0.3151 | 0.8654 |
| 0.0019 | 69.0 | 1656 | 0.7437 | 0.2857 | 0.5217 | 0.3692 | 0.8933 |
| 0.0027 | 70.0 | 1680 | 0.7911 | 0.2727 | 0.4565 | 0.3415 | 0.8898 |
| 0.0025 | 71.0 | 1704 | 0.8172 | 0.3333 | 0.4783 | 0.3929 | 0.8880 |
| 0.0037 | 72.0 | 1728 | 0.7807 | 0.2680 | 0.5652 | 0.3636 | 0.8873 |
| 0.0032 | 73.0 | 1752 | 0.9164 | 0.2683 | 0.4783 | 0.3438 | 0.8760 |
| 0.0092 | 74.0 | 1776 | 0.6410 | 0.2976 | 0.5435 | 0.3846 | 0.8836 |
| 0.0029 | 75.0 | 1800 | 0.7780 | 0.2857 | 0.5217 | 0.3692 | 0.8854 |
| 0.0017 | 76.0 | 1824 | 0.9096 | 0.2683 | 0.4783 | 0.3438 | 0.8656 |
| 0.0017 | 77.0 | 1848 | 0.8843 | 0.2911 | 0.5 | 0.368 | 0.8773 |
| 0.0019 | 78.0 | 1872 | 0.7888 | 0.2410 | 0.4348 | 0.3101 | 0.8613 |
| 0.0032 | 79.0 | 1896 | 0.9426 | 0.2241 | 0.5652 | 0.3210 | 0.8490 |
| 0.0019 | 80.0 | 1920 | 0.9566 | 0.25 | 0.3913 | 0.3051 | 0.8708 |
| 0.0017 | 81.0 | 1944 | 1.0507 | 0.2588 | 0.4783 | 0.3359 | 0.8669 |
| 0.0015 | 82.0 | 1968 | 1.1118 | 0.2174 | 0.4348 | 0.2899 | 0.8614 |
| 0.0014 | 83.0 | 1992 | 1.1422 | 0.2299 | 0.4348 | 0.3008 | 0.8548 |
| 0.0015 | 84.0 | 2016 | 1.1422 | 0.2716 | 0.4783 | 0.3465 | 0.8556 |
| 0.0013 | 85.0 | 2040 | 1.0874 | 0.2371 | 0.5 | 0.3217 | 0.8557 |
| 0.0015 | 86.0 | 2064 | 1.0420 | 0.2277 | 0.5 | 0.3129 | 0.8624 |
| 0.0013 | 87.0 | 2088 | 1.0851 | 0.2418 | 0.4783 | 0.3212 | 0.8579 |
| 0.0015 | 88.0 | 2112 | 1.1249 | 0.2556 | 0.5 | 0.3382 | 0.8622 |
| 0.0015 | 89.0 | 2136 | 1.0589 | 0.2667 | 0.5217 | 0.3529 | 0.8617 |
| 0.0014 | 90.0 | 2160 | 1.0879 | 0.2674 | 0.5 | 0.3485 | 0.8497 |
| 0.0019 | 91.0 | 2184 | 1.0425 | 0.2651 | 0.4783 | 0.3411 | 0.8551 |
| 0.0015 | 92.0 | 2208 | 1.0137 | 0.2716 | 0.4783 | 0.3465 | 0.8579 |
| 0.0015 | 93.0 | 2232 | 1.0084 | 0.2716 | 0.4783 | 0.3465 | 0.8619 |
| 0.0015 | 94.0 | 2256 | 1.0231 | 0.2727 | 0.5217 | 0.3582 | 0.8529 |
| 0.0014 | 95.0 | 2280 | 1.1031 | 0.3067 | 0.5 | 0.3802 | 0.8522 |
| 0.0014 | 96.0 | 2304 | 1.0001 | 0.2796 | 0.5652 | 0.3741 | 0.8642 |
| 0.0012 | 97.0 | 2328 | 1.0274 | 0.3253 | 0.5870 | 0.4186 | 0.8683 |
| 0.0015 | 98.0 | 2352 | 1.1420 | 0.3559 | 0.4565 | 0.4000 | 0.8579 |
| 0.0154 | 99.0 | 2376 | 0.8248 | 0.4706 | 0.5217 | 0.4948 | 0.8894 |
| 0.0041 | 100.0 | 2400 | 0.8580 | 0.2892 | 0.5217 | 0.3721 | 0.8768 |
| 0.0046 | 101.0 | 2424 | 1.0790 | 0.1792 | 0.4130 | 0.25 | 0.8623 |
| 0.0021 | 102.0 | 2448 | 1.0016 | 0.25 | 0.4348 | 0.3175 | 0.8766 |
| 0.0028 | 103.0 | 2472 | 0.8267 | 0.2899 | 0.4348 | 0.3478 | 0.8907 |
| 0.0026 | 104.0 | 2496 | 1.1740 | 0.2212 | 0.5 | 0.3067 | 0.8511 |
| 0.0018 | 105.0 | 2520 | 1.2264 | 0.1759 | 0.4130 | 0.2468 | 0.8389 |
| 0.0017 | 106.0 | 2544 | 1.1772 | 0.2451 | 0.5435 | 0.3378 | 0.8468 |
| 0.0014 | 107.0 | 2568 | 1.2155 | 0.2556 | 0.5 | 0.3382 | 0.8386 |
| 0.0018 | 108.0 | 2592 | 1.1990 | 0.2558 | 0.4783 | 0.3333 | 0.8411 |
| 0.0022 | 109.0 | 2616 | 1.0769 | 0.3425 | 0.5435 | 0.4202 | 0.8679 |
| 0.0016 | 110.0 | 2640 | 1.0793 | 0.3538 | 0.5 | 0.4144 | 0.8629 |
| 0.0019 | 111.0 | 2664 | 0.8828 | 0.2680 | 0.5652 | 0.3636 | 0.8823 |
| 0.0014 | 112.0 | 2688 | 1.0073 | 0.3548 | 0.4783 | 0.4074 | 0.8810 |
| 0.0016 | 113.0 | 2712 | 0.9562 | 0.3667 | 0.4783 | 0.4151 | 0.8827 |
| 0.0014 | 114.0 | 2736 | 0.9590 | 0.3438 | 0.4783 | 0.4 | 0.8802 |
| 0.0014 | 115.0 | 2760 | 1.0293 | 0.4 | 0.5217 | 0.4528 | 0.8814 |
| 0.0014 | 116.0 | 2784 | 1.0419 | 0.4068 | 0.5217 | 0.4571 | 0.8804 |
| 0.0012 | 117.0 | 2808 | 1.0451 | 0.4138 | 0.5217 | 0.4615 | 0.8805 |
| 0.005 | 118.0 | 2832 | 1.0514 | 0.4068 | 0.5217 | 0.4571 | 0.8803 |
| 0.0019 | 119.0 | 2856 | 1.0440 | 0.4068 | 0.5217 | 0.4571 | 0.8805 |
| 0.0015 | 120.0 | 2880 | 1.0782 | 0.4 | 0.5217 | 0.4528 | 0.8768 |
| 0.0015 | 121.0 | 2904 | 1.0736 | 0.4211 | 0.5217 | 0.4660 | 0.8765 |
| 0.0014 | 122.0 | 2928 | 1.0565 | 0.3934 | 0.5217 | 0.4486 | 0.8776 |
| 0.0013 | 123.0 | 2952 | 1.0496 | 0.4444 | 0.5217 | 0.48 | 0.8814 |
| 0.0012 | 124.0 | 2976 | 1.0805 | 0.4286 | 0.5217 | 0.4706 | 0.8805 |
| 0.0012 | 125.0 | 3000 | 1.1119 | 0.4211 | 0.5217 | 0.4660 | 0.8809 |
| 0.0013 | 126.0 | 3024 | 1.0880 | 0.4528 | 0.5217 | 0.4848 | 0.8812 |
| 0.0014 | 127.0 | 3048 | 1.0198 | 0.3729 | 0.4783 | 0.4190 | 0.8796 |
| 0.0013 | 128.0 | 3072 | 1.0028 | 0.4 | 0.5217 | 0.4528 | 0.8790 |
| 0.0014 | 129.0 | 3096 | 1.0229 | 0.3529 | 0.5217 | 0.4211 | 0.8835 |
| 0.0013 | 130.0 | 3120 | 1.0440 | 0.3380 | 0.5217 | 0.4103 | 0.8747 |
| 0.0013 | 131.0 | 3144 | 1.1109 | 0.4615 | 0.5217 | 0.4898 | 0.8781 |
| 0.0012 | 132.0 | 3168 | 1.1082 | 0.4706 | 0.5217 | 0.4948 | 0.8812 |
| 0.0013 | 133.0 | 3192 | 1.1031 | 0.4444 | 0.5217 | 0.48 | 0.8806 |
| 0.0011 | 134.0 | 3216 | 1.1345 | 0.3529 | 0.5217 | 0.4211 | 0.8713 |
| 0.0012 | 135.0 | 3240 | 1.1631 | 0.3485 | 0.5 | 0.4107 | 0.8716 |
| 0.0012 | 136.0 | 3264 | 1.1461 | 0.3429 | 0.5217 | 0.4138 | 0.8708 |
| 0.0012 | 137.0 | 3288 | 1.1592 | 0.4138 | 0.5217 | 0.4615 | 0.8683 |
| 0.0012 | 138.0 | 3312 | 1.0969 | 0.4138 | 0.5217 | 0.4615 | 0.8754 |
| 0.0013 | 139.0 | 3336 | 1.0575 | 0.3429 | 0.5217 | 0.4138 | 0.8787 |
| 0.0013 | 140.0 | 3360 | 1.0560 | 0.3636 | 0.5217 | 0.4286 | 0.8826 |
| 0.0013 | 141.0 | 3384 | 1.0525 | 0.3380 | 0.5217 | 0.4103 | 0.8796 |
| 0.0011 | 142.0 | 3408 | 1.0548 | 0.3380 | 0.5217 | 0.4103 | 0.8792 |
| 0.0013 | 143.0 | 3432 | 1.0593 | 0.3478 | 0.5217 | 0.4174 | 0.8802 |
| 0.0012 | 144.0 | 3456 | 1.0402 | 0.375 | 0.5217 | 0.4364 | 0.8827 |
| 0.0011 | 145.0 | 3480 | 1.0401 | 0.375 | 0.5217 | 0.4364 | 0.8828 |
| 0.0012 | 146.0 | 3504 | 1.0319 | 0.3810 | 0.5217 | 0.4404 | 0.8840 |
| 0.0012 | 147.0 | 3528 | 1.0328 | 0.3692 | 0.5217 | 0.4324 | 0.8838 |
| 0.0012 | 148.0 | 3552 | 1.1021 | 0.3433 | 0.5 | 0.4071 | 0.8730 |
| 0.0012 | 149.0 | 3576 | 1.0402 | 0.3485 | 0.5 | 0.4107 | 0.8817 |
| 0.0013 | 150.0 | 3600 | 0.9619 | 0.3086 | 0.5435 | 0.3937 | 0.8883 |
| 0.0014 | 151.0 | 3624 | 0.9578 | 0.3382 | 0.5 | 0.4035 | 0.8843 |
| 0.0012 | 152.0 | 3648 | 1.0303 | 0.3692 | 0.5217 | 0.4324 | 0.8830 |
| 0.0013 | 153.0 | 3672 | 1.0571 | 0.3934 | 0.5217 | 0.4486 | 0.8812 |
| 0.0012 | 154.0 | 3696 | 1.0793 | 0.3692 | 0.5217 | 0.4324 | 0.8812 |
| 0.0011 | 155.0 | 3720 | 1.0766 | 0.375 | 0.5217 | 0.4364 | 0.8803 |
| 0.0011 | 156.0 | 3744 | 1.0824 | 0.3934 | 0.5217 | 0.4486 | 0.8810 |
| 0.0012 | 157.0 | 3768 | 1.0841 | 0.4 | 0.5217 | 0.4528 | 0.8810 |
| 0.0011 | 158.0 | 3792 | 1.0866 | 0.4068 | 0.5217 | 0.4571 | 0.8812 |
| 0.0012 | 159.0 | 3816 | 1.1016 | 0.4 | 0.5217 | 0.4528 | 0.8808 |
| 0.0011 | 160.0 | 3840 | 1.1114 | 0.3810 | 0.5217 | 0.4404 | 0.8793 |
| 0.0013 | 161.0 | 3864 | 1.1427 | 0.2892 | 0.5217 | 0.3721 | 0.8577 |
| 0.0011 | 162.0 | 3888 | 1.0292 | 0.3582 | 0.5217 | 0.4248 | 0.8875 |
| 0.0012 | 163.0 | 3912 | 0.9894 | 0.375 | 0.5217 | 0.4364 | 0.8872 |
| 0.0011 | 164.0 | 3936 | 0.9877 | 0.3636 | 0.5217 | 0.4286 | 0.8870 |
| 0.0011 | 165.0 | 3960 | 0.9887 | 0.3692 | 0.5217 | 0.4324 | 0.8890 |
| 0.0012 | 166.0 | 3984 | 0.9874 | 0.3243 | 0.5217 | 0.4 | 0.8871 |
| 0.0011 | 167.0 | 4008 | 0.9992 | 0.3636 | 0.5217 | 0.4286 | 0.8896 |
| 0.0012 | 168.0 | 4032 | 0.9835 | 0.3692 | 0.5217 | 0.4324 | 0.8903 |
| 0.0011 | 169.0 | 4056 | 0.9918 | 0.3284 | 0.4783 | 0.3894 | 0.8910 |
| 0.0011 | 170.0 | 4080 | 0.9960 | 0.3438 | 0.4783 | 0.4 | 0.8914 |
| 0.0011 | 171.0 | 4104 | 1.0065 | 0.3729 | 0.4783 | 0.4190 | 0.8915 |
| 0.0012 | 172.0 | 4128 | 1.0266 | 0.3929 | 0.4783 | 0.4314 | 0.8908 |
| 0.0011 | 173.0 | 4152 | 1.0318 | 0.3929 | 0.4783 | 0.4314 | 0.8908 |
| 0.0011 | 174.0 | 4176 | 1.0329 | 0.3793 | 0.4783 | 0.4231 | 0.8908 |
| 0.0012 | 175.0 | 4200 | 1.0254 | 0.3860 | 0.4783 | 0.4272 | 0.8910 |
| 0.0011 | 176.0 | 4224 | 1.0183 | 0.4 | 0.4783 | 0.4356 | 0.8912 |
| 0.0012 | 177.0 | 4248 | 1.0205 | 0.3860 | 0.4783 | 0.4272 | 0.8909 |
| 0.0011 | 178.0 | 4272 | 1.0232 | 0.3793 | 0.4783 | 0.4231 | 0.8908 |
| 0.0011 | 179.0 | 4296 | 1.0246 | 0.3860 | 0.4783 | 0.4272 | 0.8908 |
| 0.0012 | 180.0 | 4320 | 1.0245 | 0.3793 | 0.4783 | 0.4231 | 0.8905 |
| 0.0012 | 181.0 | 4344 | 1.0223 | 0.375 | 0.4565 | 0.4118 | 0.8902 |
| 0.0011 | 182.0 | 4368 | 1.0169 | 0.3929 | 0.4783 | 0.4314 | 0.8894 |
| 0.0011 | 183.0 | 4392 | 1.0172 | 0.3929 | 0.4783 | 0.4314 | 0.8893 |
| 0.0012 | 184.0 | 4416 | 1.0147 | 0.3860 | 0.4783 | 0.4272 | 0.8894 |
| 0.0012 | 185.0 | 4440 | 1.0145 | 0.3860 | 0.4783 | 0.4272 | 0.8894 |
| 0.0011 | 186.0 | 4464 | 1.0128 | 0.3729 | 0.4783 | 0.4190 | 0.8897 |
| 0.0011 | 187.0 | 4488 | 1.0146 | 0.3729 | 0.4783 | 0.4190 | 0.8897 |
| 0.0011 | 188.0 | 4512 | 1.0160 | 0.3729 | 0.4783 | 0.4190 | 0.8897 |
| 0.0011 | 189.0 | 4536 | 1.0178 | 0.3729 | 0.4783 | 0.4190 | 0.8898 |
| 0.0011 | 190.0 | 4560 | 1.0185 | 0.3729 | 0.4783 | 0.4190 | 0.8898 |
| 0.0011 | 191.0 | 4584 | 1.0171 | 0.3793 | 0.4783 | 0.4231 | 0.8899 |
| 0.0011 | 192.0 | 4608 | 1.0179 | 0.3729 | 0.4783 | 0.4190 | 0.8898 |
| 0.0011 | 193.0 | 4632 | 1.0196 | 0.3793 | 0.4783 | 0.4231 | 0.8899 |
| 0.0012 | 194.0 | 4656 | 1.0188 | 0.3793 | 0.4783 | 0.4231 | 0.8899 |
| 0.0011 | 195.0 | 4680 | 1.0185 | 0.3793 | 0.4783 | 0.4231 | 0.8899 |
| 0.0011 | 196.0 | 4704 | 1.0194 | 0.3860 | 0.4783 | 0.4272 | 0.8898 |
| 0.0011 | 197.0 | 4728 | 1.0206 | 0.3729 | 0.4783 | 0.4190 | 0.8900 |
| 0.0011 | 198.0 | 4752 | 1.0207 | 0.3729 | 0.4783 | 0.4190 | 0.8900 |
| 0.0012 | 199.0 | 4776 | 1.0215 | 0.3729 | 0.4783 | 0.4190 | 0.8899 |
| 0.0011 | 200.0 | 4800 | 1.0215 | 0.3729 | 0.4783 | 0.4190 | 0.8899 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
mln-wave/my-pet-dog-xzg
|
mln-wave
| 2023-06-28T13:45:00Z | 17 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T13:34:16Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-XZG Dreambooth model trained by mln-wave following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19738133fad
Sample pictures of this concept:

|
atharputra/ivanaxx1
|
atharputra
| 2023-06-28T13:41:28Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T13:39:21Z |
---
license: creativeml-openrail-m
---
|
ade93/my_awesome_qa_model
|
ade93
| 2023-06-28T13:40:06Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-26T08:49:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ade93/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ade93/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: -0.7813
- Validation Loss: -1.7044
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 20, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| -0.7813 | -1.7044 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ryo1443/2ch_rinna_ppo_1k
|
ryo1443
| 2023-06-28T13:37:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T13:36:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
robrecht/space-invader-v1
|
robrecht
| 2023-06-28T13:28:25Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T13:27:56Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 29.00 +/- 64.30
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga robrecht -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga robrecht -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga robrecht
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
sleepynlp/q-Taxi-v3-v1-leo
|
sleepynlp
| 2023-06-28T13:26:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T13:26:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1-leo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="sleepynlp/q-Taxi-v3-v1-leo", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dev-senolys/camembert_base_finetunned_one_thema_balanced_6_epochs
|
dev-senolys
| 2023-06-28T13:22:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T11:52:40Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: camembert_base_finetunned_one_thema_balanced_6_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert_base_finetunned_one_thema_balanced_6_epochs
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 116 | 2.3397 |
| No log | 2.0 | 232 | 2.0559 |
| No log | 3.0 | 348 | 1.8209 |
| No log | 4.0 | 464 | 1.7804 |
| 1.9817 | 5.0 | 580 | 1.7323 |
| 1.9817 | 6.0 | 696 | 1.7237 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Shrishml/dolly_lora3b
|
Shrishml
| 2023-06-28T13:09:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T07:01:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
hassansoliman/falcon-40b-qlora-utterance-adaptations_v6
|
hassansoliman
| 2023-06-28T12:55:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-27T12:07:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
xzuyn/GPT2-RPGPT-8.48M-EPOCH1-GGML
|
xzuyn
| 2023-06-28T12:54:36Z | 0 | 0 | null |
[
"gpt2",
"gpt-2",
"region:us"
] | null | 2023-06-28T12:49:29Z |
---
tags:
- gpt2
- gpt-2
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/xzuyn/GPT2-RPGPT-8.48M
|
YakovElm/Qt_20_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T12:49:09Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T12:48:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_20_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_20_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0003
- Train Accuracy: 1.0
- Validation Loss: 0.3606
- Validation Accuracy: 0.9586
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0366 | 0.9886 | 0.2979 | 0.9586 | 0 |
| 0.0006 | 1.0 | 0.3361 | 0.9586 | 1 |
| 0.0003 | 1.0 | 0.3606 | 0.9586 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jensvw/taxi-v3
|
jensvw
| 2023-06-28T12:47:36Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T12:47:32Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # plain `gym` also works for this environment

# `load_from_hub` is assumed to be a helper that downloads and unpickles the checkpoint (not a standard import).
model = load_from_hub(repo_id="jensvw/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
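Continuing from the snippet above, the loaded Q-table can be used to act greedily. This is a sketch: the `"qtable"` key and the gymnasium-style `step` API are assumptions about how this checkpoint is pickled.
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    # Pick the greedy action for the current state from the loaded Q-table.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```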
|
robrecht/taxi_v1
|
robrecht
| 2023-06-28T12:44:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T12:36:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # plain `gym` also works for this environment

# `load_from_hub` is assumed to be a helper that downloads and unpickles the checkpoint (not a standard import).
model = load_from_hub(repo_id="robrecht/taxi_v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
lordpepe/ppo-LunarLander-v2
|
lordpepe
| 2023-06-28T12:36:31Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T12:36:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.07 +/- 15.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual naming convention of this template):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename below is an assumption.
checkpoint = load_from_hub(repo_id="lordpepe/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
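To reproduce an evaluation in the spirit of the reported mean reward, a sketch like the following should work; the episode count is an assumption, and older SB3 1.x setups would use `gym` instead of `gymnasium`.
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# LunarLander-v2 needs the Box2D extra: pip install "gymnasium[box2d]"
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```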
|
robrecht/q-FrozenLake-v1-4x4-noSlippery
|
robrecht
| 2023-06-28T12:32:46Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T12:32:43Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # plain `gym` also works for this environment

# `load_from_hub` is assumed to be a helper that downloads and unpickles the checkpoint (not a standard import).
model = load_from_hub(repo_id="robrecht/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|