| modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars) |
|---|---|---|---|---|---|---|
Ayoola/pytorch_model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-15T14:30:01Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- mouss/autotrain-data-bikes-ag
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 1.381064904462668
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 41243106351
- CO2 Emissions (in grams): 1.3811
## Validation Metrics
- Loss: 0.161
- Accuracy: 0.936
- Macro F1: 0.936
- Micro F1: 0.936
- Weighted F1: 0.936
- Macro Precision: 0.936
- Micro Precision: 0.936
- Weighted Precision: 0.936
- Macro Recall: 0.936
- Micro Recall: 0.936
- Weighted Recall: 0.936
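A minimal usage sketch (the repository id below is a placeholder, and the example image is one of the widget samples above):
```python
from transformers import pipeline

# Placeholder repo id; substitute this model's actual id on the Hub.
classifier = pipeline("image-classification", model="<this-repo-id>")
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)  # list of {"label": ..., "score": ...} dicts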
|
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
language: pl
tags:
- T5
- lemmatization
license: apache-2.0
---
# PoLemma Small
PoLemma models are intended for lemmatization of named entities and multi-word expressions in the Polish language.
They were fine-tuned from the allegro/plT5 models, e.g.: [allegro/plt5-small](https://huggingface.co/allegro/plt5-small).
## Usage
Sample usage:
```python
from transformers import pipeline

# Load the lemmatization pipeline and lemmatize a sample phrase
pipe = pipeline(task="text2text-generation", model="amu-cai/polemma-small", tokenizer="amu-cai/polemma-small")
hyp = [res['generated_text'] for res in pipe(["federalnego urzędu statystycznego"], clean_up_tokenization_spaces=True, num_beams=5)][0]
```
## Evaluation results
Lemmatization Exact Match was computed on the SlavNER 2021 test set.
| Model | Exact Match |
| :------ | ------: |
| [polemma-large](https://huggingface.co/amu-cai/polemma-large) | 92.61 |
| [polemma-base](https://huggingface.co/amu-cai/polemma-base) | 91.34 |
| [polemma-small](https://huggingface.co/amu-cai/polemma-small) | 88.46 |
## Citation
If you use the model, please cite the following paper:
```
@inproceedings{palka-nowakowski-2023-exploring,
title = "Exploring the Use of Foundation Models for Named Entity Recognition and Lemmatization Tasks in {S}lavic Languages",
author = "Pa{\l}ka, Gabriela and
Nowakowski, Artur",
booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bsnlp-1.19",
pages = "165--171",
abstract = "This paper describes Adam Mickiewicz University{'}s (AMU) solution for the 4th Shared Task on SlavNER. The task involves the identification, categorization, and lemmatization of named entities in Slavic languages. Our approach involved exploring the use of foundation models for these tasks. In particular, we used models based on the popular BERT and T5 model architectures. Additionally, we used external datasets to further improve the quality of our models. Our solution obtained promising results, achieving high metrics scores in both tasks. We describe our approach and the results of our experiments in detail, showing that the method is effective for NER and lemmatization in Slavic languages. Additionally, our models for lemmatization will be available at: https://huggingface.co/amu-cai.",
}
```
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Ayran/DialoGPT-small-gandalf
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcoper-v1_try2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 11.80 +/- 11.74
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ayumi/Jovana
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: UchihaMadara/thesis-pretrained-3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# UchihaMadara/thesis-pretrained-3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.1729
- Validation Loss: 3.0741
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 0.001, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.001, 'decay_steps': -853, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
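For reference, a hedged sketch of an equivalent optimizer built with `transformers.create_optimizer` (the 0.001 peak learning rate, 1000 warm-up steps, and 0.01 weight decay rate come from the serialized config above; the total step count is an illustrative assumption):
```python
from transformers import create_optimizer

# Sketch only: total training steps are assumed for illustration.
optimizer, lr_schedule = create_optimizer(
    init_lr=1e-3,
    num_warmup_steps=1000,
    num_train_steps=2000,
    weight_decay_rate=0.01,
)
```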
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.7385 | 3.1931 | 0 |
| 3.1729 | 3.0741 | 1 |
### Framework versions
- Transformers 4.27.0
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad
|
[
"pytorch",
"electra",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"ElectraForQuestionAnswering"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
language:
- uz
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: whisper-small-sv-test2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-sv-test2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10
- mixed_precision_training: Native AMP
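A hedged sketch of `Seq2SeqTrainingArguments` roughly matching the hyperparameters above (the output directory is a placeholder; the other values are taken from the list):
```python
from transformers import Seq2SeqTrainingArguments

# Sketch: mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-sv-test2",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10,
    fp16=True,  # "Native AMP" mixed precision
)
```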
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AyushPJ/ai-club-inductions-21-nlp-distilBERT
|
[
"pytorch",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
language: pl
tags:
- T5
- lemmatization
license: apache-2.0
---
# PoLemma Large
PoLemma models are intended for lemmatization of named entities and multi-word expressions in the Polish language.
They were fine-tuned from the allegro/plT5 models, e.g.: [allegro/plt5-large](https://huggingface.co/allegro/plt5-large).
## Usage
Sample usage:
```python
from transformers import pipeline

# Load the lemmatization pipeline and lemmatize a sample phrase
pipe = pipeline(task="text2text-generation", model="amu-cai/polemma-large", tokenizer="amu-cai/polemma-large")
hyp = [res['generated_text'] for res in pipe(["federalnego urzędu statystycznego"], clean_up_tokenization_spaces=True, num_beams=5)][0]
```
## Evaluation results
Lemmatization Exact Match was computed on the SlavNER 2021 test set.
| Model | Exact Match |
| :------ | ------: |
| [polemma-large](https://huggingface.co/amu-cai/polemma-large) | 92.61 |
| [polemma-base](https://huggingface.co/amu-cai/polemma-base) | 91.34 |
| [polemma-small](https://huggingface.co/amu-cai/polemma-small) | 88.46 |
## Citation
If you use the model, please cite the following paper:
```
@inproceedings{palka-nowakowski-2023-exploring,
title = "Exploring the Use of Foundation Models for Named Entity Recognition and Lemmatization Tasks in {S}lavic Languages",
author = "Pa{\l}ka, Gabriela and
Nowakowski, Artur",
booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bsnlp-1.19",
pages = "165--171",
abstract = "This paper describes Adam Mickiewicz University{'}s (AMU) solution for the 4th Shared Task on SlavNER. The task involves the identification, categorization, and lemmatization of named entities in Slavic languages. Our approach involved exploring the use of foundation models for these tasks. In particular, we used models based on the popular BERT and T5 model architectures. Additionally, we used external datasets to further improve the quality of our models. Our solution obtained promising results, achieving high metrics scores in both tasks. We describe our approach and the results of our experiments in detail, showing that the method is effective for NER and lemmatization in Slavic languages. Additionally, our models for lemmatization will be available at: https://huggingface.co/amu-cai.",
}
```
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
AyushPJ/ai-club-inductions-21-nlp-roBERTa
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
pipeline_tag: text-to-image
---
|
AyushPJ/test-squad-trained-finetuned-squad
|
[
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Azaghast/DistilBART-SCP-ParaSummarization
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"BartForConditionalGeneration"
],
"model_type": "bart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 142,
"min_length": 56,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_reward_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_reward_model
This model is a fine-tuned version of [gavin124/gpt2-finetuned-cnn-summarization-v2](https://huggingface.co/gavin124/gpt2-finetuned-cnn-summarization-v2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Azaghast/GPT2-SCP-Descriptions
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | 2023-03-15T14:56:20Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook (not a library import)
model = load_from_hub(repo_id="quilaquedi/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
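Continuing from the snippet above, a hedged sketch of a greedy rollout (it assumes the classic `gym` reset/step API and that the downloaded dictionary stores the Q-table under a `"qtable"` key, which is an assumption):
```python
import numpy as np

# Greedy rollout; the "qtable" key name and classic gym API are assumptions.
state = env.reset()
done, total_reward = False, 0
while not done:
    action = np.argmax(model["qtable"][state])
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```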
|
Azuris/DialoGPT-medium-senorita
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.47 +/- 0.54
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
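A minimal loading sketch under stated assumptions (the repository id and checkpoint filename below are placeholders, not values taken from this card):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id / filename; substitute the actual values for this model.
checkpoint = load_from_hub(repo_id="<this-repo-id>", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```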
|
BJTK2/model_name
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
BSC-LT/roberta-base-bne-sqac
|
[
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"qa",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | 2023-03-15T15:34:45Z |
---
language:
- pl
- cs
- ru
tags:
- mT5
- lemmatization
license: apache-2.0
---
# SlavLemma Small
SlavLemma models are intended for lemmatization of named entities and multi-word expressions in Polish, Czech, and Russian.
They were fine-tuned from the google/mT5 models, e.g.: [google/mt5-small](https://huggingface.co/google/mt5-small).
## Usage
When using the model, prepend one of the language tokens (`>>pl<<`, `>>cs<<`, `>>ru<<`) to the input, based on the language of the phrase you want to lemmatize.
Sample usage:
```python
from transformers import pipeline

# Load the lemmatization pipeline and lemmatize a sample phrase (note the language token prefix)
pipe = pipeline(task="text2text-generation", model="amu-cai/slavlemma-small", tokenizer="amu-cai/slavlemma-small")
hyp = [res['generated_text'] for res in pipe([">>pl<< federalnego urzędu statystycznego"], clean_up_tokenization_spaces=True, num_beams=5)][0]
```
## Evaluation results
Lemmatization Exact Match was computed on the SlavNER 2021 test sets (COVID-19 and USA 2020 Elections).
COVID-19:
| Model | pl | cs | ru |
| :------ | ------: | ------: | ------: |
| [slavlemma-large](https://huggingface.co/amu-cai/slavlemma-large) | 93.76 | 89.80 | 77.30 |
| [slavlemma-base](https://huggingface.co/amu-cai/slavlemma-base) | 91.00 | 86.29 | 76.10 |
| [slavlemma-small](https://huggingface.co/amu-cai/slavlemma-small) | 86.80 | 80.98 | 73.83 |
USA 2020 Elections:
| Model | pl | cs | ru |
| :------ | ------: | ------: | ------: |
| [slavlemma-large](https://huggingface.co/amu-cai/slavlemma-large) | 89.12 | 87.27 | 82.50 |
| [slavlemma-base](https://huggingface.co/amu-cai/slavlemma-base) | 84.19 | 81.97 | 80.27 |
| [slavlemma-small](https://huggingface.co/amu-cai/slavlemma-small) | 78.85 | 75.86 | 76.18 |
## Citation
If you use the model, please cite the following paper:
```
@inproceedings{palka-nowakowski-2023-exploring,
title = "Exploring the Use of Foundation Models for Named Entity Recognition and Lemmatization Tasks in {S}lavic Languages",
author = "Pa{\l}ka, Gabriela and
Nowakowski, Artur",
booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bsnlp-1.19",
pages = "165--171",
abstract = "This paper describes Adam Mickiewicz University{'}s (AMU) solution for the 4th Shared Task on SlavNER. The task involves the identification, categorization, and lemmatization of named entities in Slavic languages. Our approach involved exploring the use of foundation models for these tasks. In particular, we used models based on the popular BERT and T5 model architectures. Additionally, we used external datasets to further improve the quality of our models. Our solution obtained promising results, achieving high metrics scores in both tasks. We describe our approach and the results of our experiments in detail, showing that the method is effective for NER and lemmatization in Slavic languages. Additionally, our models for lemmatization will be available at: https://huggingface.co/amu-cai.",
}
```
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BSC-LT/roberta-large-bne-sqac
|
[
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"qa",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
# Vocabulary Trimmed [lmqg/mt5-small-koquad-qa](https://huggingface.co/lmqg/mt5-small-koquad-qa): `vocabtrimmer/mt5-small-koquad-qa-trimmed-ko-5000`
This model is a trimmed version of [lmqg/mt5-small-koquad-qa](https://huggingface.co/lmqg/mt5-small-koquad-qa), created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-koquad-qa | vocabtrimmer/mt5-small-koquad-qa-trimmed-ko-5000 |
|:---------------------------|:---------------------------|:---------------------------------------------------|
| parameter_size_full | 300,165,504 | 49,184,128 |
| parameter_size_embedding | 256,103,424 | 5,122,048 |
| vocab_size | 250,101 | 5,002 |
| compression_rate_full | 100.0 | 16.39 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | 5000 | 2 |
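A minimal loading sketch (the prompt shown is illustrative; the trimmed model expects the same question-answering input format as the parent lmqg model):
```python
from transformers import pipeline

# Load the trimmed checkpoint; the "question: ..., context: ..." prompt format follows the parent lmqg QA model.
qa = pipeline("text2text-generation", model="vocabtrimmer/mt5-small-koquad-qa-trimmed-ko-5000")
print(qa("question: ..., context: ...", max_length=64)[0]["generated_text"])
```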
|
BSen/wav2vec2-base-timit-demo-colab
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
# Vocabulary Trimmed [lmqg/mt5-small-frquad-qa](https://huggingface.co/lmqg/mt5-small-frquad-qa): `vocabtrimmer/mt5-small-frquad-qa-trimmed-fr-5000`
This model is a trimmed version of [lmqg/mt5-small-frquad-qa](https://huggingface.co/lmqg/mt5-small-frquad-qa), created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-frquad-qa | vocabtrimmer/mt5-small-frquad-qa-trimmed-fr-5000 |
|:---------------------------|:---------------------------|:---------------------------------------------------|
| parameter_size_full | 300,165,504 | 49,185,152 |
| parameter_size_embedding | 256,103,424 | 5,123,072 |
| vocab_size | 250,101 | 5,003 |
| compression_rate_full | 100.0 | 16.39 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 5000 | 2 |
|
Bagus/SER-LSSED
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [lmqg/mt5-small-jaquad-qa](https://huggingface.co/lmqg/mt5-small-jaquad-qa): `vocabtrimmer/mt5-small-jaquad-qa-trimmed-ja-10000`
This model is a trimmed version of [lmqg/mt5-small-jaquad-qa](https://huggingface.co/lmqg/mt5-small-jaquad-qa), created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-jaquad-qa | vocabtrimmer/mt5-small-jaquad-qa-trimmed-ja-10000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 54,304,128 |
| parameter_size_embedding | 256,103,424 | 10,242,048 |
| vocab_size | 250,101 | 10,002 |
| compression_rate_full | 100.0 | 18.09 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ja | vocabtrimmer/mc4_validation | text | ja | validation | 10000 | 2 |
|
Bagus/wav2vec2-xlsr-japanese-speech-emotion-recognition
|
[
"pytorch",
"wav2vec2",
"audio-classification",
"ja",
"dataset:jtes",
"transformers",
"audio",
"speech",
"speech-emotion-recognition",
"has_space"
] |
audio-classification
|
{
"architectures": [
"HubertForSequenceClassification"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | null |
# Vocabulary Trimmed [lmqg/mt5-small-frquad-qa](https://huggingface.co/lmqg/mt5-small-frquad-qa): `vocabtrimmer/mt5-small-frquad-qa-trimmed-fr-10000`
This model is a trimmed version of [lmqg/mt5-small-frquad-qa](https://huggingface.co/lmqg/mt5-small-frquad-qa), created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-frquad-qa | vocabtrimmer/mt5-small-frquad-qa-trimmed-fr-10000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 54,304,128 |
| parameter_size_embedding | 256,103,424 | 10,242,048 |
| vocab_size | 250,101 | 10,002 |
| compression_rate_full | 100.0 | 18.09 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 10000 | 2 |
|
BatuhanYilmaz/bert-finetuned-ner
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -117.32 +/- 88.60
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
'seed': 1,
'torch_deterministic': True,
'cuda': True,
'track': False,
'wandb_project_name': 'cleanRL',
'wandb_entity': None,
'capture_video': False,
'env_id': 'LunarLander-v2',
'total_timesteps': 50000,
'learning_rate': 0.00025,
'num_envs': 4,
'num_steps': 128,
'anneal_lr': True,
'gae': True,
'gamma': 0.99,
'gae_lambda': 0.95,
'num_minibatches': 4,
'update_epochs': 4,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'target_kl': None,
'repo_id': 'mshibatatt/cleanrlppo-LunarLander-v2',
'batch_size': 512,
'minibatch_size': 128}
```
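For reference, the last two entries are derived values in the cleanrl PPO implementation: `batch_size = num_envs × num_steps = 4 × 128 = 512` and `minibatch_size = batch_size / num_minibatches = 512 / 4 = 128`.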
|
BatuhanYilmaz/bert-finetuned-nerxD
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- ar
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small ar - Mohammed Nasri
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ar
split: test
args: 'config: ar, split: test'
metrics:
- name: Wer
type: wer
value: 39.30217610871362
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ar - Mohammed Nasri
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2667
- Wer: 39.3022
## Model description
More information needed
## Intended uses & limitations
More information needed
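A minimal inference sketch (the repository id below is a placeholder for this model's id on the Hub):
```python
from transformers import pipeline

# Placeholder repo id; substitute this model's actual id on the Hub.
asr = pipeline("automatic-speech-recognition", model="<this-repo-id>")
print(asr("audio.mp3")["text"])
```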
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2262 | 0.05 | 1000 | 0.3206 | 42.6903 |
| 0.2 | 0.1 | 2000 | 0.3067 | 42.8354 |
| 0.1944 | 0.16 | 3000 | 0.2863 | 40.6648 |
| 0.1785 | 0.21 | 4000 | 0.2736 | 39.4675 |
| 0.1641 | 0.26 | 5000 | 0.2667 | 39.3022 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
BatuhanYilmaz/code-search-net-tokenizer1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- bn
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
# widget:
# - example_title: Librispeech sample 1
# src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
# - example_title: Librispeech sample 2
# src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-small-bn
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: bn
split: test
args:
language: bn
metrics:
- name: Test WER
type: wer
value: 35.14
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
# `processor` is a WhisperProcessor instance loaded for the corresponding checkpoint
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```
This forces the model to predict in English under the task of speech recognition.
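Putting this together, a minimal transcription sketch (it uses the base `openai/whisper-small` checkpoint and a dummy LibriSpeech sample for illustration; substitute the fine-tuned checkpoint and your own audio as needed):
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset

# Base checkpoint shown for illustration; substitute the fine-tuned repo id as needed.
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")

# Dummy audio sample for illustration
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features

# Generate token ids and decode them to text
predicted_ids = model.generate(input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```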
## Training Data
- Common Voice 11.0 Bengali (train split)
- OpenSLR 53 Bengali (train split)
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
BatuhanYilmaz/dummy
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [lmqg/mt5-small-itquad-qa](https://huggingface.co/lmqg/mt5-small-itquad-qa): `vocabtrimmer/mt5-small-itquad-qa-trimmed-it-15000`
This model is a trimmed version of [lmqg/mt5-small-itquad-qa](https://huggingface.co/lmqg/mt5-small-itquad-qa), created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-itquad-qa | vocabtrimmer/mt5-small-itquad-qa-trimmed-it-15000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 59,424,128 |
| parameter_size_embedding | 256,103,424 | 15,362,048 |
| vocab_size | 250,101 | 15,002 |
| compression_rate_full | 100.0 | 19.8 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 15000 | 2 |
|
Bee-Garbs/DialoGPT-real-cartman-small
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
# Vocabulary Trimmed [lmqg/mt5-small-frquad-qa](https://huggingface.co/lmqg/mt5-small-frquad-qa): `vocabtrimmer/mt5-small-frquad-qa-trimmed-fr-120000`
This model is a trimmed version of [lmqg/mt5-small-frquad-qa](https://huggingface.co/lmqg/mt5-small-frquad-qa), created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-frquad-qa | vocabtrimmer/mt5-small-frquad-qa-trimmed-fr-120000 |
|:---------------------------|:---------------------------|:-----------------------------------------------------|
| parameter_size_full | 300,165,504 | 166,944,128 |
| parameter_size_embedding | 256,103,424 | 122,882,048 |
| vocab_size | 250,101 | 120,002 |
| compression_rate_full | 100.0 | 55.62 |
| compression_rate_embedding | 100.0 | 47.98 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 120000 | 2 |
|
Beri/legal-qa
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
# Vocabulary Trimmed [lmqg/mt5-small-koquad-qa](https://huggingface.co/lmqg/mt5-small-koquad-qa): `vocabtrimmer/mt5-small-koquad-qa-trimmed-ko-60000`
This model is a trimmed version of [lmqg/mt5-small-koquad-qa](https://huggingface.co/lmqg/mt5-small-koquad-qa), created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-koquad-qa | vocabtrimmer/mt5-small-koquad-qa-trimmed-ko-60000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 105,504,128 |
| parameter_size_embedding | 256,103,424 | 61,442,048 |
| vocab_size | 250,101 | 60,002 |
| compression_rate_full | 100.0 | 35.15 |
| compression_rate_embedding | 100.0 | 23.99 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | 60000 | 2 |
|
Berzemu/Coco
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa): `vocabtrimmer/mt5-small-esquad-qa-trimmed-es-90000`
This model is a trimmed version of [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa), created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-esquad-qa | vocabtrimmer/mt5-small-esquad-qa-trimmed-es-90000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 136,224,128 |
| parameter_size_embedding | 256,103,424 | 92,162,048 |
| vocab_size | 250,101 | 90,002 |
| compression_rate_full | 100.0 | 45.38 |
| compression_rate_embedding | 100.0 | 35.99 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 90000 | 2 |
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
# Vocabulary Trimmed [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa): `vocabtrimmer/mt5-small-esquad-qa-trimmed-es-120000`
This model is a trimmed version of [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa), created with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-esquad-qa | vocabtrimmer/mt5-small-esquad-qa-trimmed-es-120000 |
|:---------------------------|:---------------------------|:-----------------------------------------------------|
| parameter_size_full | 300,165,504 | 166,944,128 |
| parameter_size_embedding | 256,103,424 | 122,882,048 |
| vocab_size | 250,101 | 120,002 |
| compression_rate_full | 100.0 | 55.62 |
| compression_rate_embedding | 100.0 | 47.98 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 120000 | 2 |
|
Bharathdamu/wav2vec2-model-hindibhasha
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa): `vocabtrimmer/mt5-small-esquad-qa-trimmed-es-15000`
This model is a trimmed version of [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-esquad-qa | vocabtrimmer/mt5-small-esquad-qa-trimmed-es-15000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 59,424,128 |
| parameter_size_embedding | 256,103,424 | 15,362,048 |
| vocab_size | 250,101 | 15,002 |
| compression_rate_full | 100.0 | 19.8 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 15000 | 2 |
|
Bia18/Beatriz
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [lmqg/mt5-small-ruquad-qa](https://huggingface.co/lmqg/mt5-small-ruquad-qa): `vocabtrimmer/mt5-small-ruquad-qa-trimmed-ru-60000`
This model is a trimmed version of [lmqg/mt5-small-ruquad-qa](https://huggingface.co/lmqg/mt5-small-ruquad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-ruquad-qa | vocabtrimmer/mt5-small-ruquad-qa-trimmed-ru-60000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 105,504,128 |
| parameter_size_embedding | 256,103,424 | 61,442,048 |
| vocab_size | 250,101 | 60,002 |
| compression_rate_full | 100.0 | 35.15 |
| compression_rate_embedding | 100.0 | 23.99 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ru | vocabtrimmer/mc4_validation | text | ru | validation | 60000 | 2 |
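As a hedged usage sketch (not part of the original card), the trimmed Russian checkpoint can be loaded with `transformers` (plus `sentencepiece`); the input format below follows the parent lmqg QA model and is an assumption.
```python
from transformers import pipeline
qa = pipeline("text2text-generation", model="vocabtrimmer/mt5-small-ruquad-qa-trimmed-ru-60000")
# Assumed input format, following the parent lmqg QA model.
prompt = "question: Где живёт Саша?, context: Мою подругу зовут Саша, и она живёт в Москве."
print(qa(prompt, max_length=32)[0]["generated_text"])
```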
|
BigSalmon/FormalBerta
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9175
- name: F1
type: f1
value: 0.917868093658934
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2300
- Accuracy: 0.9175
- F1: 0.9179
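As a minimal inference sketch (not part of the original card): the model path below is a placeholder for wherever this fine-tuned checkpoint is saved or hosted, and the emitted label names depend on the id-to-label mapping used during training.
```python
from transformers import pipeline
# Placeholder path: point this at the local output_dir or Hub repo holding this checkpoint.
classifier = pipeline("text-classification", model="path/to/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how happy this made me!"))
```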
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8387 | 1.0 | 250 | 0.3276 | 0.9045 | 0.9016 |
| 0.2573 | 2.0 | 500 | 0.2300 | 0.9175 | 0.9179 |
### Framework versions
- Transformers 4.27.0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
BigSalmon/FormalRobertaaa
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: bart-large-cnn-billsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.5014
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-billsum
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7658
- Rouge1: 0.5014
- Rouge2: 0.2463
- Rougel: 0.3189
- Rougelsum: 0.3752
- Gen Len: 125.5645
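As a minimal inference sketch (not part of the original card): the model path is a placeholder for this checkpoint's location, and the generation lengths are illustrative.
```python
from transformers import pipeline
# Placeholder path: point this at the local output_dir or Hub repo holding this checkpoint.
summarizer = pipeline("summarization", model="path/to/bart-large-cnn-billsum")
bill_text = "SECTION 1. SHORT TITLE. This Act may be cited as the ..."  # full bill text goes here
print(summarizer(bill_text, max_length=128, min_length=32)[0]["summary_text"])
```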
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------:|
| No log | 1.0 | 248 | 1.8112 | 0.4809 | 0.2299 | 0.3067 | 0.3716 | 113.1371 |
| No log | 2.0 | 496 | 1.7501 | 0.5089 | 0.2484 | 0.325 | 0.3844 | 123.9435 |
| 1.7258 | 3.0 | 744 | 1.7386 | 0.5008 | 0.2412 | 0.3163 | 0.3732 | 127.2056 |
| 1.7258 | 4.0 | 992 | 1.7658 | 0.5014 | 0.2463 | 0.3189 | 0.3752 | 125.5645 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
BigSalmon/GPT2HardArticleEasyArticle
|
[
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.08 +/- 0.36
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
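A hedged sketch of what that code could look like; the repo id and filename are hypothetical, and interacting with the environment additionally requires `panda-gym`.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Hypothetical repo id and filename; replace them with the actual checkpoint location.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
# To roll out the policy you also need the environment, e.g.:
# import gym, panda_gym; env = gym.make("PandaReachDense-v2")
```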
|
BigSalmon/GPTNeo350MInformalToFormalLincoln
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: mit
language:
- en
library_name: transformers
widget:
- text: ""
---
This is a BERTweet-base model that has been further pre-trained with preferential masking of emotion words for 100k steps on about 6.3M Vent posts.
This model is meant to be fine-tuned on labeled data or used as a feature extractor for downstream tasks.
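As a feature-extraction sketch (not part of the original card); the model name is a placeholder for this repository's Hub id.
```python
import torch
from transformers import AutoTokenizer, AutoModel
model_name = "<this-repo-id>"  # placeholder for the Hub id of this model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
inputs = tokenizer("I am feeling great today!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Mean-pool the last hidden state into a fixed-size sentence embedding.
embedding = outputs.last_hidden_state.mean(dim=1)
```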
## Citation
Please cite the following paper if you find the model useful for your work:
```bibtex
@article{aroyehun2023leia,
title={LEIA: Linguistic Embeddings for the Identification of Affect},
author={Aroyehun, Segun Taofeek and Malik, Lukas and Metzler, Hannah and Haimerl, Nikolas and Di Natale, Anna and Garcia, David},
journal={arXiv preprint arXiv:2304.10973},
year={2023}
}
```
|
BigSalmon/GPTNeo350MInformalToFormalLincoln3
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
# Vocabulary Trimmed [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa): `vocabtrimmer/mt5-small-esquad-qa-trimmed-es-30000`
This model is a trimmed version of [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-esquad-qa | vocabtrimmer/mt5-small-esquad-qa-trimmed-es-30000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 74,784,128 |
| parameter_size_embedding | 256,103,424 | 30,722,048 |
| vocab_size | 250,101 | 30,002 |
| compression_rate_full | 100.0 | 24.91 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 30000 | 2 |
|
BigSalmon/InformalToFormalLincoln15
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | 2023-03-15T17:27:07Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt_reward_model_10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt_reward_model_10000
This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on an unknown dataset.
dataset size = 10000, test accuracy = 0.5698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Framework versions
- Transformers 4.27.0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
BigSalmon/InformalToFormalLincoln16
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
# Vocabulary Trimmed [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa): `vocabtrimmer/mt5-small-esquad-qa-trimmed-es-60000`
This model is a trimmed version of [lmqg/mt5-small-esquad-qa](https://huggingface.co/lmqg/mt5-small-esquad-qa) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-esquad-qa | vocabtrimmer/mt5-small-esquad-qa-trimmed-es-60000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 105,504,128 |
| parameter_size_embedding | 256,103,424 | 61,442,048 |
| vocab_size | 250,101 | 60,002 |
| compression_rate_full | 100.0 | 35.15 |
| compression_rate_embedding | 100.0 | 23.99 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 60000 | 2 |
|
BigSalmon/InformalToFormalLincoln17
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: phonenix/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BigSalmon/InformalToFormalLincoln18
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.35 +/- 5.74
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r PakanunNoa/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
BigSalmon/InformalToFormalLincoln21
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | 2023-03-15T17:34:52Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: bigscience-bloom-rail-1.0
inference: false
thumbnail: "https://imagedelivery.net/_wFNZAzgWNWPmneM1cyjcw/artifact/449d42a8-28c5-44da-afd7-28d7e29a264c/public"
---
# pony-diffusion-v4 - "same, but different" edition
pony-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality pony, furry and other non-photorealistic images through fine-tuning.
**WARNING:** This model is capable of producing NSFW content, so it is recommended to use the 'safe' tag in the prompt, in combination with a negative prompt for image features you may want to suppress (e.g. nudity).
**Despite its name, this model is capable of producing a wide range of furry and cartoon images as a side effect of improved data diversity (with the exception of anime styles, for which Waifu Diffusion is a much stronger choice).**
Special thanks to [Waifu-Diffusion](https://huggingface.co/hakurei/waifu-diffusion) for providing fine-tuning expertise and advice throughout the process; without their help this project would not exist.
[Pruned safetensors PyTorch Model (use this with Automatic1111 or other SD UIs)](https://mega.nz/file/wO0EkC5L#N-IUbBe2e83_hIdepiRjSFg_81So3ZQsskNE4eD0v9A)
[Automatic1111 colab](https://colab.research.google.com/drive/1_DPJkBSu1eXnkJkbOj2AJe-1tfmx4RIq?usp=sharing) and [Diffusers colab](https://colab.research.google.com/drive/1nsVDaDjRyGO1hnD6VS2HUtpqocUyYPTk?usp=sharing)
**Please join PurpleSmartAI Discord to use this model with our free SD bot and get early access to models in development.**
[](https://discord.gg/pYsdjMfu3q)
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1079384242267619408/17554-3113296769-derpibooru_p_95_solo_pony_striped_socks.png width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1080413124856909824/19326-8615930-derpibooru_p_95_solo_colt_anthro_sitcom_80s_sitcom_bright_colors_psychedelic_background_detailed_eyes.png width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1080403658010792086/19206-4045326196-derpibooru_p_95_solo_colt_anthro_sitcom_80s_sitcom_visible_muscles.png width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1080401470127611914/19174-120239397-derpibooru_p_95_solo_anthro_sitcom_80s_sitcom_toned_muscles.png width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1079239743306465310/17084-1826441328-derpibooru_p_95_princess_celestia_solo_show_accurate_vector.png width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1080396021009494127/19147-917128423-derpibooru_p_95_solo_anthro_sitcom_80s_sitcom.png width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1080630588790276106/19759-2048683679-rule34_p_95_gelbooru_p_95_solo_raven_teen_titans_portrait_only_face.png width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1083364771253977098/upscaled_derpibooru__p_95e621_p_95small_bre-3.webp width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1081630926250987560/upscaled_a_cute_female_fox_fluffy_fuzzy_8k_de-3.webp width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1080637317636706414/19878-2524568788-rule34_p_95_gelbooru_p_95_master_chief_halo.png width=25% height=25%>
<img src=https://cdn.discordapp.com/attachments/1079212109826625576/1085790783266893885/upscaled_safe_derpibooru_p_95an_attractive_po-1.webp width=25% height=25%>
You can see more samples at [PurpleSmartAI](https://purplesmart.ai/collection/top?nsfw=0&page=1&gen_type=txt2img&model=8&order=created_desc)
## Model Description
The model originally used for fine-tuning is [Stable Diffusion V1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).
This particular checkpoint has been fine-tuned with a learning rate of 5.0e-6 for 15 epochs on approximately 3M pony, furry and other cartoon text-image pairs (using metadata from derpibooru, e621 and danbooru).
## Improvements over previous models
### Better disentanglement of tag-based prompts
Also known as ["using Hidden States of CLIP’s Penultimate Layer"](https://blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db82ac#:~:text=Using%20Hidden%20States%20of%20CLIP%E2%80%99s%20Penultimate%20Layer), a technique adopted by SD2 which should lead to generally higher-quality and more tag-driven outputs.
Compared to pony-diffusion-v3, using the penultimate CLIP layer is generally the best choice, but trying both CLIP skip 1 and 2 is still recommended.
### Improved data quality labeling
We recommend adding 'derpibooru_p_95' to the prompt and 'derpibooru_p_low' to the negative prompt to improve the quality of generated pony images.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
## Example Code
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline, DDIMScheduler
model_id = "AstraliteHeart/pony-diffusion-v4"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(
model_id,
torch_dtype=torch.float16,
revision="fp16",
scheduler=DDIMScheduler(
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear",
clip_sample=False,
set_alpha_to_one=False,
),
)
pipe = pipe.to(device)
prompt = "pinkie pie anthro portrait wedding dress veil intricate highly detailed digital painting artstation concept art smooth sharp focus illustration Unreal Engine 5 8K"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("cute_poner.png")
```
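Building on the example above and the data-quality tags recommended earlier, a hedged variation with a negative prompt might look like this; it assumes a diffusers version whose pipeline call accepts `negative_prompt` and exposes the output as `.images`.
```python
# Sketch only: assumes a newer diffusers API (negative_prompt supported, output exposed as .images).
prompt = "derpibooru_p_95, safe, solo pony, striped socks, detailed eyes, digital painting"
negative_prompt = "derpibooru_p_low"
image = pipe(prompt, negative_prompt=negative_prompt, guidance_scale=7.5).images[0]
image.save("quality_tagged_poner.png")
```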
## Team Members and Acknowledgements
This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).
- [Waifu-Diffusion for helping with finetuning](https://huggingface.co/hakurei/waifu-diffusion)
In order to reach us, you can join our [Discord server](https://discord.gg/WG78ZbSB).
|
BigSalmon/InformalToFormalLincolnDistilledGPT2
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -142.39 +/- 77.61
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ThoDum/ppo-LunarLander-v2-2.0'
'batch_size': 512
'minibatch_size': 128}
```
|
BigSalmon/Lincoln4
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1089.27 +/- 294.61
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
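A hedged sketch of what that code could look like; the repo id and filename are hypothetical, `pybullet` registers the environment, and results may differ if the agent was trained with observation normalization (VecNormalize) whose statistics are not loaded here.
```python
import gym
import pybullet_envs  # registers AntBulletEnv-v0 (assumes pybullet is installed)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy
# Hypothetical repo id and filename; replace them with the actual checkpoint location.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```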
|
BigSalmon/MrLincoln
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.25 +/- 5.07
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Kurokabe/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
BigSalmon/MrLincoln12
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="MerveOzer/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
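Continuing from the snippet above, a short greedy rollout sketch (not part of the original card); it assumes the downloaded dictionary also exposes the learned table under a `qtable` key, and it uses the pre-0.26 gym reset/step API.
```python
import numpy as np
qtable = model["qtable"]  # assumption: the pushed dict stores the learned Q-table under this key
state = env.reset()  # pre-0.26 gym API; newer gym/gymnasium returns (obs, info) and a 5-tuple from step()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily with respect to the Q-table
    state, reward, done, info = env.step(action)
```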
|
BigSalmon/MrLincoln125MNeo
|
[
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: train
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.915483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7774
- Accuracy: 0.9155
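As a minimal inference sketch (not part of the original card): the model path is a placeholder for this checkpoint's location, and the prediction is one of the CLINC intent labels.
```python
from transformers import pipeline
# Placeholder path: point this at the local output_dir or Hub repo holding this checkpoint.
intent_classifier = pipeline("text-classification", model="path/to/distilbert-base-uncased-finetuned-clinc")
print(intent_classifier("Please transfer 100 dollars from checking to savings."))
```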
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2919 | 1.0 | 318 | 3.2820 | 0.7345 |
| 2.6258 | 2.0 | 636 | 1.8744 | 0.8284 |
| 1.5515 | 3.0 | 954 | 1.1575 | 0.8894 |
| 1.0196 | 4.0 | 1272 | 0.8632 | 0.9094 |
| 0.7983 | 5.0 | 1590 | 0.7774 | 0.9155 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0
- Datasets 2.7.1
- Tokenizers 0.12.1
|
BigSalmon/MrLincoln14
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-15T18:16:51Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: JUNGU/pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BigSalmon/MrLincoln5
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="MerveOzer/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BigSalmon/MrLincoln6
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Bladder
- annotated:True
---
# Description
Tabula Sapiens: a cross-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"layer": null,
"batch_key": "donor_assay",
"labels_key": "cell_ontology_class",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 5 |
| n_cells | 24583 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 15 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|--------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
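A hedged loading sketch (not part of the original card), assuming the scvi-tools Hub integration introduced around version 0.20; the repo name is a placeholder for this model's Hub id.
```python
from scvi.hub import HubModel
# Placeholder repo name: replace with the Hub id of this model.
hub_model = HubModel.pull_from_huggingface_hub(repo_name="<this-repo-id>")
model = hub_model.model   # trained SCVI model
adata = hub_model.adata   # (minified) AnnData used for training
latent = model.get_latent_representation()
```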
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Bladder_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BigSalmon/MrLincoln7
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCANVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Bladder
- annotated:True
---
# Description
Tabula Sapiens: a cross-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"unlabeled_category": "unknown",
"layer": null,
"batch_key": "donor_assay",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 5 |
| n_cells | 24583 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 16 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|----------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scanvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scanvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scanvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Bladder_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BigSalmon/MrLincolnBerta
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:RNAStereoscope
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Bladder
- annotated:True
---
# Description
Tabula Sapiens: a cross-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 24583 |
| n_labels | 15 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Bladder_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BigSalmon/NEO125InformalToFormalLincoln
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: SebastianS/poca-SoccerTwos_light
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BigSalmon/Neo
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | null |
## Pretraining Without Attention(BiGS) <br>
## Official JAX Models with maximal sequence length 1024<br>
### [Paper](https://arxiv.org/abs/2212.10544) | [](https://huggingface.co/JunxiongWang) | [](https://colab.research.google.com/drive/1Fz3OSRF3PZEF_dlnyJ3KZ8Bq35DfUrIB?usp=sharing)
<img width="537" alt="BiGS" src="https://user-images.githubusercontent.com/16102460/221464744-06b6538a-7e84-4c95-909f-239eab1dba71.png">
This [repository](https://github.com/jxiw/BiGS) contains BiGS's JAX model definitions, pretrained model weights, and training and fine-tuning code for our paper exploring the use of state-space models for pretraining. You can find more details in the paper.
[**Pretraining Without Attention**](https://arxiv.org/abs/2212.10544)<br>
[Junxiong Wang](), [Jing Nathan Yan](), [Albert Gu](), [Alexander M.Rush]()
<br>Cornell University, Cornell Tech, DeepMind<br>
Transformers have been essential to pretraining success in NLP. While other architectures have been used, downstream accuracy is either significantly worse, or requires attention layers to match standard benchmarks such as GLUE. This work explores pretraining without attention by using recent advances in sequence routing based on state-space models (SSMs). Our proposed model, Bidirectional Gated SSM (BiGS), combines SSM layers with a multiplicative gating architecture that has been effective in simplified sequence modeling architectures. The model learns static layers that do not consider pair-wise interactions. Even so, BiGS is able to match BERT pretraining accuracy on GLUE and can be extended to long-form pretraining of 4096 tokens without approximation. Analysis shows that while the models have similar accuracy, the approach has significantly different inductive biases than BERT in terms of interactions and syntactic representations.
### Load Masked Language Model
```python
import jax
from jax import numpy as jnp
from transformers import BertTokenizer
from BiGS.modeling_flax_bigs import FlaxBiGSForMaskedLM
# BiGS reuses the bert-large-uncased vocabulary, so the standard BERT tokenizer applies
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
# Pretrained BiGS masked-LM checkpoint with maximum sequence length 1024
model = FlaxBiGSForMaskedLM.from_pretrained('JunxiongWang/BiGS_1024')

text = "The goal of life is [MASK]."
encoded_input = tokenizer(text, return_tensors='np', padding='max_length', max_length=1024)
output = model(**encoded_input)
# Top-10 token predictions at the [MASK] position (token id 103 in this vocabulary)
tokenizer.convert_ids_to_tokens(jnp.flip(jnp.argsort(jax.nn.softmax(output.logits[encoded_input['input_ids']==103]))[0])[:10])

text = "Paris is the [MASK] of France."
encoded_input = tokenizer(text, return_tensors='np', padding='max_length', max_length=1024)
output = model(**encoded_input)
tokenizer.convert_ids_to_tokens(jnp.flip(jnp.argsort(jax.nn.softmax(output.logits[encoded_input['input_ids']==103]))[0])[:10])
```
### Load Sequence Classification Model
```python
from BiGS.modeling_flax_bigs import FlaxBiGSForSequenceClassification
model = FlaxBiGSForSequenceClassification.from_pretrained('JunxiongWang/BiGS_1024')
```
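A minimal usage sketch (an illustration, not from the original card): reusing the tokenizer from the masked-LM example above, a forward pass through the sequence-classification head looks roughly as follows; this head is not fine-tuned in the pretrained checkpoint, so the logits only become meaningful after downstream training.
```python
# Hedged sketch: forward pass through the (not yet fine-tuned) classification head.
encoded = tokenizer("BiGS processes this sentence without attention.",
                    return_tensors='np', padding='max_length', max_length=1024)
logits = model(**encoded).logits  # shape: (batch_size, num_labels)
```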
### Load Question Answering Model
```python
from BiGS.modeling_flax_bigs import FlaxBiGSForQuestionAnswering
model = FlaxBiGSForQuestionAnswering.from_pretrained('JunxiongWang/BiGS_1024')
```
### Load Multiple Choice Classification Model
```python
from BiGS.modeling_flax_bigs import FlaxBiGSForMultipleChoice
model = FlaxBiGSForMultipleChoice.from_pretrained('JunxiongWang/BiGS_1024')
```
|
BigSalmon/Points2
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
## Pretraining Without Attention (BiGS) <br>
## Official JAX Models with maximum sequence length 4096<br>
### [Paper](https://arxiv.org/abs/2212.10544) | [](https://huggingface.co/JunxiongWang) | [](https://colab.research.google.com/drive/1Fz3OSRF3PZEF_dlnyJ3KZ8Bq35DfUrIB?usp=sharing)
<img width="537" alt="BiGS" src="https://user-images.githubusercontent.com/16102460/221464744-06b6538a-7e84-4c95-909f-239eab1dba71.png">
This [repository](https://github.com/jxiw/BiGS) contains BiGS's JAX model definitions, pretrained model weights, and training and fine-tuning code for our paper exploring the use of state-space models for pretraining. You can find more details in our paper.
[**Pretraining Without Attention**](https://arxiv.org/abs/2212.10544)<br>
[Junxiong Wang](), [Jing Nathan Yan](), [Albert Gu](), [Alexander M. Rush]()
<br>Cornell University, Cornell Tech, DeepMind<br>
Transformers have been essential to pretraining success in NLP. While other architectures have been used, downstream accuracy is either significantly worse, or requires attention layers to match standard benchmarks such as GLUE. This work explores pretraining without attention by using recent advances in sequence routing based on state-space models (SSMs). Our proposed model, Bidirectional Gated SSM (BiGS), combines SSM layers with a multiplicative gating architecture that has been effective in simplified sequence modeling architectures. The model learns static layers that do not consider pair-wise interactions. Even so, BiGS is able to match BERT pretraining accuracy on GLUE and can be extended to long-form pretraining of 4096 tokens without approximation. Analysis shows that while the models have similar accuracy, the approach has significantly different inductive biases than BERT in terms of interactions and syntactic representations.
### Load Masked Language Model
```python
import jax
from jax import numpy as jnp
from transformers import BertTokenizer
from BiGS.modeling_flax_bigs import FlaxBiGSForMaskedLM
# BiGS reuses the bert-large-uncased vocabulary, so the standard BERT tokenizer applies
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
# Pretrained BiGS masked-LM checkpoint with maximum sequence length 4096
model = FlaxBiGSForMaskedLM.from_pretrained('JunxiongWang/BiGS_4096')

text = "The goal of life is [MASK]."
encoded_input = tokenizer(text, return_tensors='np', padding='max_length', max_length=4096)
output = model(**encoded_input)
# Top-10 token predictions at the [MASK] position (token id 103 in this vocabulary)
tokenizer.convert_ids_to_tokens(jnp.flip(jnp.argsort(jax.nn.softmax(output.logits[encoded_input['input_ids']==103]))[0])[:10])

text = "Paris is the [MASK] of France."
encoded_input = tokenizer(text, return_tensors='np', padding='max_length', max_length=4096)
output = model(**encoded_input)
tokenizer.convert_ids_to_tokens(jnp.flip(jnp.argsort(jax.nn.softmax(output.logits[encoded_input['input_ids']==103]))[0])[:10])
```
### Load Sequence Classification Model
```python
from BiGS.modeling_flax_bigs import FlaxBiGSForSequenceClassification
model = FlaxBiGSForSequenceClassification.from_pretrained('JunxiongWang/BiGS_4096')
```
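A minimal usage sketch (an illustration, not from the original card): reusing the tokenizer from the masked-LM example above, a forward pass through the sequence-classification head looks roughly as follows; this head is not fine-tuned in the pretrained checkpoint, so the logits only become meaningful after downstream training.
```python
# Hedged sketch: forward pass through the (not yet fine-tuned) classification head.
encoded = tokenizer("BiGS processes this sentence without attention.",
                    return_tensors='np', padding='max_length', max_length=4096)
logits = model(**encoded).logits  # shape: (batch_size, num_labels)
```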
### Load Question Answering Model
```python
from BiGS.modeling_flax_bigs import FlaxBiGSForQuestionAnswering
model = FlaxBiGSForQuestionAnswering.from_pretrained('JunxiongWang/BiGS_4096')
```
### Load Multiple Choice Classification Model
```python
from BiGS.modeling_flax_bigs import FlaxBiGSForMultipleChoice
model = FlaxBiGSForMultipleChoice.from_pretrained('JunxiongWang/BiGS_4096')
```
|
BigSalmon/Robertsy
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Bone_Marrow
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"layer": null,
"batch_key": "donor_assay",
"labels_key": "cell_ontology_class",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 6 |
| n_cells | 34766 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 18 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|--------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
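A minimal loading sketch (an illustration, not part of the original upload): assuming scvi-tools >= 0.20 with its hub utilities available, and substituting this card's actual Hub repository name for the placeholder id below, the pretrained SCVI model can be pulled and queried roughly as follows.
```python
# Hedged sketch: pull this pretrained SCVI model from the Hugging Face Hub.
# "your-org/tabula-sapiens-bone-marrow-scvi" is a placeholder repo id, not this card's real name.
from scvi.hub import HubModel

hub_model = HubModel.pull_from_huggingface_hub("your-org/tabula-sapiens-bone-marrow-scvi")
model = hub_model.model  # trained SCVI model, with its (minified) AnnData attached
latent = model.get_latent_representation()  # 20-dimensional latent embedding per cell
```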
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Bone_Marrow_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Bimal/my_bot_model
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:RNAStereoscope
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Fat
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 34766 |
| n_labels | 18 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
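RNAStereoscope is the single-cell reference half of the Stereoscope deconvolution workflow. The sketch below is an assumption-based illustration (placeholder repo id, not part of the original upload) of pulling such a reference model from the Hub with scvi-tools' hub utilities.
```python
# Hedged sketch: fetch the trained RNAStereoscope reference model from the Hub.
# The repo id is a placeholder; replace it with this card's actual repository name.
from scvi.hub import HubModel

hub_model = HubModel.pull_from_huggingface_hub("your-org/tabula-sapiens-fat-stereoscope")
sc_model = hub_model.model
# The per-cell-type expression profiles learned here are typically handed to
# SpatialStereoscope to deconvolve spatial transcriptomics spots.
```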
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Fat_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Biniam/en_ti_translate
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCANVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Heart
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"unlabeled_category": "unknown",
"layer": null,
"batch_key": "donor_assay",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 2 |
| n_cells | 11505 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 7 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|----------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scanvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scanvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scanvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
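A hedged usage sketch (not part of the original upload): with a placeholder repo id and scvi-tools >= 0.20, the SCANVI model can be pulled from the Hub and used for embedding and label transfer roughly as follows.
```python
# Hedged sketch: load the pretrained SCANVI model from the Hub.
# The repo id is a placeholder; replace it with this card's actual repository name.
from scvi.hub import HubModel

hub_model = HubModel.pull_from_huggingface_hub("your-org/tabula-sapiens-heart-scanvi")
scanvi = hub_model.model
latent = scanvi.get_latent_representation()  # 20-dimensional, batch-corrected embedding
# For label transfer, a query AnnData prepared for this reference model can be
# annotated with scanvi.predict().
```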
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Heart_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BinksSachary/ShaxxBot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:RNAStereoscope
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Heart
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 11505 |
| n_labels | 6 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Heart_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BitanBiswas/mbert-bengali-ner-finetuned-ner
|
[
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | 2023-03-15T19:15:32Z |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCANVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Large_Intestine
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"unlabeled_category": "unknown",
"layer": null,
"batch_key": "donor_assay",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 2 |
| n_cells | 11505 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 7 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|----------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scanvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scanvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scanvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Large_Intestine_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Blackmist786/DialoGPt-small-transformers4
|
[
"pytorch"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:RNAStereoscope
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Large_Intestine
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 11505 |
| n_labels | 6 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Large_Intestine_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Blazeolmo/Scrabunzi
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCANVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Liver
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"unlabeled_category": "unknown",
"layer": null,
"batch_key": "donor_assay",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 2 |
| n_cells | 2860 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 13 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|----------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scanvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scanvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scanvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Liver_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Blerrrry/Kkk
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:CondSCVI
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Liver
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 5,
"n_layers": 2,
"weight_obs": false,
"dropout_rate": 0.05
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 2860 |
| n_labels | 12 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
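CondSCVI is the single-cell model used on the reference side of the DestVI deconvolution workflow. The sketch below is an assumption-based illustration (placeholder repo id, not part of the original upload) of pulling it from the Hub.
```python
# Hedged sketch: fetch the trained CondSCVI reference model from the Hub.
# The repo id is a placeholder; replace it with this card's actual repository name.
from scvi.hub import HubModel

hub_model = HubModel.pull_from_huggingface_hub("your-org/tabula-sapiens-liver-condscvi")
sc_model = hub_model.model
# This reference model would typically be combined with a spatial dataset via
# DestVI.from_rna_model() to estimate per-spot cell-type proportions.
```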
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Liver_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BlightZz/DialoGPT-medium-Kurisu
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 19 | 2023-03-15T19:17:33Z |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:RNAStereoscope
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Liver
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 2860 |
| n_labels | 12 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Liver_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BlindMan820/Sarcastic-News-Headlines
|
[
"pytorch",
"distilbert",
"text-classification",
"English",
"dataset:Kaggle Dataset",
"transformers",
"Text",
"Sequence-Classification",
"Sarcasm",
"DistilBert"
] |
text-classification
|
{
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Lung
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"layer": null,
"batch_key": "donor_assay",
"labels_key": "cell_ontology_class",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 2 |
| n_cells | 2860 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 12 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|--------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Lung_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Bloodwarrior/Chikfalay
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCANVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Lung
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"unlabeled_category": "unknown",
"layer": null,
"batch_key": "donor_assay",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 2 |
| n_cells | 2860 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 13 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|----------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scanvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scanvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scanvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Lung_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BlueGamerBeast/DialoGPT-small-Morgana
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:CondSCVI
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Lung
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 5,
"n_layers": 2,
"weight_obs": false,
"dropout_rate": 0.05
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 2860 |
| n_labels | 12 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Lung_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BobBraico/bert-finetuned-ner
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCANVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Lymph_Node
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"unlabeled_category": "unknown",
"layer": null,
"batch_key": "donor_assay",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 2 |
| n_cells | 2860 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 13 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|----------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scanvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scanvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scanvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Lymph_Node_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BogdanKuloren/continual-learning-paper-embeddings-model
|
[
"pytorch",
"mpnet",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"MPNetModel"
],
"model_type": "mpnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Enter your model_id: eduiqe/SnowballTarget1
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
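As an alternative to the browser demo, the trained policy files can also be fetched locally. The snippet below is a hedged illustration (not part of the original card) using `huggingface_hub`; it only downloads the repository contents.
```python
# Hedged sketch: download this repo's trained SnowballTarget policy files locally.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="eduiqe/SnowballTarget1")
print(local_dir)  # local folder containing the exported policy (*.onnx / *.nn) files
```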
|
Branex/gpt-neo-2.7B
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCANVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Muscle
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"unlabeled_category": "unknown",
"layer": null,
"batch_key": "donor_assay",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 2 |
| n_cells | 2860 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 13 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|----------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scanvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scanvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scanvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Muscle_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Brayan/CNN_Brain_Tumor
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:CondSCVI
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Muscle
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 5,
"n_layers": 2,
"weight_obs": false,
"dropout_rate": 0.05
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 2860 |
| n_labels | 12 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Muscle_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Brendan/cse244b-hw2-roberta
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:RNAStereoscope
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Muscle
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 2860 |
| n_labels | 12 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Muscle_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Brinah/1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Pancreas
- annotated:True
---
# Description
Tabula Sapiens: a multi-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"layer": null,
"batch_key": "donor_assay",
"labels_key": "cell_ontology_class",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 4 |
| n_cells | 13488 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 14 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|--------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Pancreas_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Broadus20/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:CondSCVI
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Pancreas
- annotated:True
---
# Description
Tabula Sapiens: a cross-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 5,
"n_layers": 2,
"weight_obs": false,
"dropout_rate": 0.05
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 13488 |
| n_labels | 14 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the Hugging Face Model Hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Pancreas_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Broadus20/DialoGPT-small-joshua
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:RNAStereoscope
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Pancreas
- annotated:True
---
# Description
Tabula Sapiens: a cross-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 13488 |
| n_labels | 14 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the Hugging Face Model Hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Pancreas_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
Brona/poc_de
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCANVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Prostate
- annotated:True
---
# Description
Tabula Sapiens: a cross-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"unlabeled_category": "unknown",
"layer": null,
"batch_key": "donor_assay",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 4 |
| n_cells | 13488 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 15 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|----------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scanvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scanvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scanvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the Hugging Face Model Hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Prostate_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
|
BrunoNogueira/DialoGPT-kungfupanda
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -127.75 +/- 93.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders to fill in):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename; replace them with the ones for this model.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = PPO.load(checkpoint)
```
|
Brunomezenga/NN
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-10000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-fr | vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-10000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 93,725,955 |
| parameter_size_embedding | 192,001,536 | 7,681,536 |
| vocab_size | 250,002 | 10,002 |
| compression_rate_full | 100.0 | 33.71 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 10000 | 2 |
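A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

# Load the trimmed checkpoint with the standard text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-10000",
)
print(classifier("Ce film était vraiment génial !"))
```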
|
Brykee/DialoGPT-medium-Morty
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-15000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-fr | vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-15000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 97,565,955 |
| parameter_size_embedding | 192,001,536 | 11,521,536 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 35.09 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 15000 | 2 |
|
Bubb-les/DisloGPT-medium-HarryPotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-30000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-fr | vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-30000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 109,085,955 |
| parameter_size_embedding | 192,001,536 | 23,041,536 |
| vocab_size | 250,002 | 30,002 |
| compression_rate_full | 100.0 | 39.23 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 30000 | 2 |
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 73 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-60000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-fr](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-fr | vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-60000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 132,125,955 |
| parameter_size_embedding | 192,001,536 | 46,081,536 |
| vocab_size | 250,002 | 60,002 |
| compression_rate_full | 100.0 | 47.52 |
| compression_rate_embedding | 100.0 | 24.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 60000 | 2 |
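A short sanity-check sketch, assuming the standard `transformers` auto classes; the numbers printed should roughly match the table above:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "vocabtrimmer/xlm-roberta-base-tweet-sentiment-fr-trimmed-fr-60000"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Expect roughly 60k vocabulary entries and ~132M parameters.
print(len(tokenizer), model.num_parameters())
```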
|
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 37 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: brahamdp/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 54 | 2023-03-15T19:55:31Z |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-pt): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-pt-trimmed-pt-5000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-pt) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-pt | vocabtrimmer/xlm-roberta-base-tweet-sentiment-pt-trimmed-pt-5000 |
|:---------------------------|:-------------------------------------------------|:-------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 89,885,955 |
| parameter_size_embedding | 192,001,536 | 3,841,536 |
| vocab_size | 250,002 | 5,002 |
| compression_rate_full | 100.0 | 32.33 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | 5000 | 2 |
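A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

# Load the trimmed checkpoint with the standard text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-roberta-base-tweet-sentiment-pt-trimmed-pt-5000",
)
print(classifier("Esse filme foi ótimo!"))
```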
|
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"has_space"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 19,850 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-pt): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-pt-trimmed-pt-10000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-pt) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-pt | vocabtrimmer/xlm-roberta-base-tweet-sentiment-pt-trimmed-pt-10000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 93,725,955 |
| parameter_size_embedding | 192,001,536 | 7,681,536 |
| vocab_size | 250,002 | 10,002 |
| compression_rate_full | 100.0 | 33.71 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | 10000 | 2 |
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 45 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: eduiqe/Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 63 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-pt): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-pt-trimmed-pt-30000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-pt](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-pt) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-pt | vocabtrimmer/xlm-roberta-base-tweet-sentiment-pt-trimmed-pt-30000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 109,085,955 |
| parameter_size_embedding | 192,001,536 | 23,041,536 |
| vocab_size | 250,002 | 30,002 |
| compression_rate_full | 100.0 | 39.23 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | 30000 | 2 |
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.12 +/- 8.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders to fill in):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename; replace them with the ones for this model.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 132 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub here is the pickle-loading helper from the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="FabienDaniel/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
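Continuing from the snippet above, a short greedy-rollout sketch; it assumes the pickled dictionary exposes the Q-table under a `qtable` key (as in the Deep RL course notebooks) and the classic gym step API:
```python
import numpy as np

# Greedy rollout with the downloaded Q-table (classic gym API; newer gym/gymnasium
# returns (obs, info) from reset() and a 5-tuple from step()).
state = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```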
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 855 | null |
---
license: gpl-3.0
---
Trained on a dataset consisting of every voice line from Takeo on the Black Ops 3 map "The Giant".
|
CAMeL-Lab/bert-base-arabic-camelbert-mix
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"Arabic",
"Dialect",
"Egyptian",
"Gulf",
"Levantine",
"Classical Arabic",
"MSA",
"Modern Standard Arabic",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20,880 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-ar](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-ar): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-ar-trimmed-ar-5000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-ar](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-ar) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-ar | vocabtrimmer/xlm-roberta-base-tweet-sentiment-ar-trimmed-ar-5000 |
|:---------------------------|:-------------------------------------------------|:-------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 89,885,955 |
| parameter_size_embedding | 192,001,536 | 3,841,536 |
| vocab_size | 250,002 | 5,002 |
| compression_rate_full | 100.0 | 32.33 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | 5000 | 2 |
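A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

# Load the trimmed checkpoint with the standard text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-roberta-base-tweet-sentiment-ar-trimmed-ar-5000",
)
print(classifier("هذا الفيلم رائع جدا"))
```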
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 229 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-ar](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-ar): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-ar-trimmed-ar-15000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-ar](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-ar) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-ar | vocabtrimmer/xlm-roberta-base-tweet-sentiment-ar-trimmed-ar-15000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 97,565,955 |
| parameter_size_embedding | 192,001,536 | 11,521,536 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 35.09 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | 15000 | 2 |
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 52 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-ar](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-ar): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-ar-trimmed-ar-30000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-ar](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-ar) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-ar | vocabtrimmer/xlm-roberta-base-tweet-sentiment-ar-trimmed-ar-30000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 109,085,955 |
| parameter_size_embedding | 192,001,536 | 23,041,536 |
| vocab_size | 250,002 | 30,002 |
| compression_rate_full | 100.0 | 39.23 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | 30000 | 2 |
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 133 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-jaquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-jaquad-qg): `vocabtrimmer/mbart-large-cc25-jaquad-qg-trimmed-ja`
This model is a trimmed version of [lmqg/mbart-large-cc25-jaquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-jaquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-jaquad-qg | vocabtrimmer/mbart-large-cc25-jaquad-qg-trimmed-ja |
|:---------------------------|:----------------------------------|:-----------------------------------------------------|
| parameter_size_full | 610,852,864 | 434,424,832 |
| parameter_size_embedding | 512,057,344 | 159,201,280 |
| vocab_size | 250,028 | 77,735 |
| compression_rate_full | 100.0 | 71.12 |
| compression_rate_embedding | 100.0 | 31.09 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| ja | vocabtrimmer/mc4_validation | text | ja | validation | | 2 |
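A minimal generation sketch, assuming the standard `transformers` text2text-generation pipeline; the `<hl>`-highlighted input format follows the upstream lmqg question-generation convention, and the sentence is illustrative only:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="vocabtrimmer/mbart-large-cc25-jaquad-qg-trimmed-ja",
)
# The answer span is wrapped in <hl> tokens, as in the upstream lmqg model card.
print(generator("ゾフィーは貴族出身ではあったが王族出身ではなく、<hl> 1902年 <hl> に結婚した。"))
```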
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 587.50 +/- 92.31
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lipee -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lipee -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lipee
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,967 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-it](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-it): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-it-trimmed-it-5000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-it](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-it) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-it | vocabtrimmer/xlm-roberta-base-tweet-sentiment-it-trimmed-it-5000 |
|:---------------------------|:-------------------------------------------------|:-------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 89,885,955 |
| parameter_size_embedding | 192,001,536 | 3,841,536 |
| vocab_size | 250,002 | 5,002 |
| compression_rate_full | 100.0 | 32.33 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 5000 | 2 |
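A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

# Load the trimmed checkpoint with the standard text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-roberta-base-tweet-sentiment-it-trimmed-it-5000",
)
print(classifier("Questo film è davvero fantastico!"))
```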
|
CAUKiel/JavaBERT-uncased
|
[
"pytorch",
"safetensors",
"bert",
"fill-mask",
"java",
"code",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: ThomasSimonini/ppo-HuggyTest
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
CAUKiel/JavaBERT
|
[
"pytorch",
"safetensors",
"bert",
"fill-mask",
"code",
"arxiv:2110.10404",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 388 | null |
# Vocabulary Trimmed [lmqg/mbart-large-cc25-koquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg): `vocabtrimmer/mbart-large-cc25-koquad-qg-trimmed-ko`
This model is a trimmed version of [lmqg/mbart-large-cc25-koquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mbart-large-cc25-koquad-qg | vocabtrimmer/mbart-large-cc25-koquad-qg-trimmed-ko |
|:---------------------------|:----------------------------------|:-----------------------------------------------------|
| parameter_size_full | 610,852,864 | 402,563,072 |
| parameter_size_embedding | 512,057,344 | 95,477,760 |
| vocab_size | 250,028 | 46,620 |
| compression_rate_full | 100.0 | 65.9 |
| compression_rate_embedding | 100.0 | 18.65 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | | 2 |
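A minimal generation sketch, assuming the standard `transformers` text2text-generation pipeline; the `<hl>`-highlighted input format follows the upstream lmqg question-generation convention, and the sentence is illustrative only:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="vocabtrimmer/mbart-large-cc25-koquad-qg-trimmed-ko",
)
# The answer span is wrapped in <hl> tokens, as in the upstream lmqg model card.
print(generator("1984년에 설립된 이 회사는 <hl> 서울 <hl> 에 본사를 두고 있다."))
```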
|
CLAck/en-km
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-it](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-it): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-it-trimmed-it-15000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-it](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-it) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-it | vocabtrimmer/xlm-roberta-base-tweet-sentiment-it-trimmed-it-15000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 97,565,955 |
| parameter_size_embedding | 192,001,536 | 11,521,536 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 35.09 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 15000 | 2 |
|
CLAck/en-vi
|
[
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] |
translation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-it](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-it): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-it-trimmed-it-30000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-it](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-it) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-it | vocabtrimmer/xlm-roberta-base-tweet-sentiment-it-trimmed-it-30000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 109,085,955 |
| parameter_size_embedding | 192,001,536 | 23,041,536 |
| vocab_size | 250,002 | 30,002 |
| compression_rate_full | 100.0 | 39.23 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 30000 | 2 |
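The vocabulary and embedding figures in the table above can be verified directly from the checkpoint; a minimal sketch (23,041,536 = 30,002 × 768 for the base-size hidden dimension):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "vocabtrimmer/xlm-roberta-base-tweet-sentiment-it-trimmed-it-30000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Trimmed vocabulary size (expected to match the 30,002 reported above).
print(len(tokenizer))

# Embedding parameters: vocab_size x hidden_size = 30,002 x 768 = 23,041,536.
embeddings = model.get_input_embeddings().weight
print(tuple(embeddings.shape), embeddings.numel())
```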
|
CLEE/CLEE
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.03 +/- 3.92
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r matthh/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
CLTL/MedRoBERTa.nl
|
[
"pytorch",
"roberta",
"fill-mask",
"nl",
"transformers",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,988 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### arki-20230315-2300-analog-3000-steps on Stable Diffusion via Dreambooth
#### model by NickKolok
This is the Stable Diffusion model fine-tuned on the arki-20230315-2300-analog-3000-steps concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **arki**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
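For a quick start outside the notebooks, a minimal `diffusers` sketch is shown below; the repository id is an assumption (the card does not state where the weights are hosted), and the prompt simply includes the instance token **arki**:
```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical repository id -- replace with the repo that actually hosts these weights.
model_id = "NickKolok/arki-20230315-2300-analog-3000-steps"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The Dreambooth concept is triggered by including the instance prompt "arki".
image = pipe("a portrait photo of arki, analog film style").images[0]
image.save("arki.png")
```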
|
CLTL/icf-levels-att
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32 | 2023-03-15T21:23:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- khyleri-s-style
---
### khyleri's style Dreambooth model trained by Anonim3327 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Model base: Anything diffusion v4.5 (!!!VAE is not needed!!!)
Images were taken from https://twitter.com/khyleri
P.S. I opened a Discord server where you can offer your ideas for models: https://discord.gg/HDfvejBPMJ
Sample pictures of this concept:





|
CLTL/icf-levels-mbw
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: token_fine_tunned_flipkart_2_gl11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# token_fine_tunned_flipkart_2_gl11
This model is a fine-tuned version of [vinayak361/token_fine_tunned_flipkart_2_gl7](https://huggingface.co/vinayak361/token_fine_tunned_flipkart_2_gl7) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2288
- Precision: 0.9084
- Recall: 0.9229
- F1: 0.9156
- Accuracy: 0.9276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.362 | 1.0 | 1086 | 0.3079 | 0.8734 | 0.8941 | 0.8836 | 0.9012 |
| 0.3096 | 2.0 | 2172 | 0.2767 | 0.8858 | 0.9035 | 0.8946 | 0.9102 |
| 0.2806 | 3.0 | 3258 | 0.2591 | 0.8935 | 0.9111 | 0.9022 | 0.9167 |
| 0.2553 | 4.0 | 4344 | 0.2475 | 0.8989 | 0.9159 | 0.9073 | 0.9203 |
| 0.2372 | 5.0 | 5430 | 0.2400 | 0.9032 | 0.9184 | 0.9107 | 0.9237 |
| 0.2306 | 6.0 | 6516 | 0.2359 | 0.9060 | 0.9198 | 0.9128 | 0.9255 |
| 0.217 | 7.0 | 7602 | 0.2320 | 0.9063 | 0.9214 | 0.9138 | 0.9260 |
| 0.2048 | 8.0 | 8688 | 0.2302 | 0.9075 | 0.9226 | 0.9150 | 0.9268 |
| 0.2086 | 9.0 | 9774 | 0.2290 | 0.9086 | 0.9226 | 0.9155 | 0.9272 |
| 0.2072 | 10.0 | 10860 | 0.2288 | 0.9084 | 0.9229 | 0.9156 | 0.9276 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
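To try the checkpoint, it can be loaded with the token-classification pipeline; a minimal sketch, assuming the repository id `vinayak361/token_fine_tunned_flipkart_2_gl11` (the card itself does not state where the model is published):
```python
from transformers import pipeline

# Hypothetical repository id -- adjust to wherever this checkpoint is hosted.
ner = pipeline(
    task="token-classification",
    model="vinayak361/token_fine_tunned_flipkart_2_gl11",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("samsung galaxy m32 6gb ram 128gb storage blue"))
```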
|
Calamarii/calamari
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:CondSCVI
- scvi_version:0.20.0b1
- anndata_version:0.8.0
- modality:rna
- tissue:Skin
- annotated:True
---
# Description
Tabula Sapiens: a cross-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 5,
"n_layers": 2,
"weight_obs": false,
"dropout_rate": 0.05
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"layer": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|------------------|-------|
| n_cells | 13488 |
| n_labels | 14 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|--------------|---------------------------|
| X | adata.X |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Skin_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
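A minimal loading sketch with scvi-tools is shown below; the file and directory names are placeholders, and the AnnData is assumed to carry the `cell_ontology_class` labels the model was set up with (see `model_setup_anndata_args` above):
```python
import anndata
import scvi

# Placeholder paths: the training AnnData linked above and the downloaded model directory.
adata = anndata.read_h5ad("TS_Skin_filtered.h5ad")

# Register the data the same way the checkpoint was set up (labels_key="cell_ontology_class").
scvi.model.CondSCVI.setup_anndata(adata, labels_key="cell_ontology_class")
model = scvi.model.CondSCVI.load("model_dir", adata=adata)

# Per-cell latent representation (n_latent = 5 for this checkpoint).
latent = model.get_latent_representation()
print(latent.shape)
```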
|
Cameron/BERT-SBIC-targetcategory
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCANVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- tissue:Small_Intestine
- annotated:True
---
# Description
Tabula Sapiens: a cross-organ dataset of cell types in human tissues.
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 20,
"n_layers": 3,
"dropout_rate": 0.05,
"dispersion": "gene",
"gene_likelihood": "nb",
"latent_distribution": "normal",
"use_batch_norm": "none",
"use_layer_norm": "both",
"encode_covariates": true
}
```
**model_setup_anndata_args**:
```json
{
"labels_key": "cell_ontology_class",
"unlabeled_category": "unknown",
"layer": null,
"batch_key": "donor_assay",
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 4 |
| n_cells | 13488 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 15 |
| n_latent_qzm | 20 |
| n_latent_qzv | 20 |
| n_vars | 4000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|-------------------|----------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
| latent_qzm | adata.obsm['_scanvi_latent_qzm'] |
| latent_qzv | adata.obsm['_scanvi_latent_qzv'] |
| minify_type | adata.uns['_scvi_adata_minify_type'] |
| observed_lib_size | adata.obs['_scanvi_observed_lib_size'] |
**model_parent_module**: scvi.model
**data_is_minified**: True
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: https://zenodo.org/api/files/fd2c61e6-f4cd-4984-ade0-24d26d9adef6/TS_Small_Intestine_filtered.h5ad
# Training code
This is an optional link to the code used to train the model.
Training code url: https://github.com/scvi-hub-references/tabula_sapiens/main.py
# References
The Tabula Sapiens: A multi-organ, single-cell transcriptomic atlas of humans. The Tabula Sapiens Consortium. Science 2022.05.13; doi: https://doi.org/10.1126/science.abl4896
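A minimal loading sketch for this SCANVI checkpoint is shown below; the paths are placeholders, and the call pattern assumes the downloaded model directory together with the training AnnData linked above:
```python
import anndata
import scvi

# Placeholder paths: the training AnnData linked above and the downloaded model directory.
adata = anndata.read_h5ad("TS_Small_Intestine_filtered.h5ad")
model = scvi.model.SCANVI.load("model_dir", adata=adata)

# 20-dimensional latent space (n_latent_qzm / n_latent_qzv = 20 in the stats above).
latent = model.get_latent_representation()

# Cell-type predictions over the 15 labels registered at training time.
predictions = model.predict(adata)
print(latent.shape, predictions[:5])
```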
|