| modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars) |
|---|---|---|---|---|---|---|
DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,907 | null | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- science
widget:
- text: a photo of cms cosmos
---
# DreamBooth model for cms trained by carlosabadia.
This is a Stable Diffusion model fine-tuned on the cms concept with DreamBooth. It can be used by modifying the `instance_prompt`: **cms cosmos**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `cosmos` images for the science theme.
<img src="https://i.imgur.com/UsQ9BVi.jpg" width="70%" height="70%">
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('carlosabadia/cosmos')
image = pipeline("a photo of cms cosmos").images[0]  # the pipeline requires a prompt; build it around the instance prompt "cms cosmos"
image
```
|
DamolaMack/Classyfied | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: sunny-conditional
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sunny-conditional
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.1,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 128,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>',
'should_insert_prefix': False},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'sunny-conditional',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
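For quick smoke-testing, a minimal generation sketch with the aligned control prefix, mirroring the generation kwargs above (hedged: the Hub repo id is an assumption pieced together from `hub_model_id` and the W&B namespace):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
repo = "kejian/sunny-conditional"  # assumption: hub_model_id pushed under the kejian namespace
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
# Prepend the aligned control token, then sample with the card's generation settings.
inputs = tokenizer("<|aligned|>def fizzbuzz(n):", return_tensors="pt")
out = model.generate(**inputs, do_sample=True, temperature=0.7, top_p=0.9,
                     max_length=128, eos_token_id=0)
print(tokenizer.decode(out[0]))
```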
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/16hlierl |
Danbi/distilroberta-base-finetuned-wikitext2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-ijelid
results: []
widget:
- text: "Productnya bagus bgt guys, nek bales chat cepet tur pelayanane apik."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-ijelid
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5701
- Precision: 0.9255
- Recall: 0.9206
- F1: 0.9229
- Accuracy: 0.9449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 25 | 0.5654 | 0.9300 | 0.9143 | 0.9219 | 0.9443 |
| No log | 2.0 | 50 | 0.5853 | 0.9272 | 0.9162 | 0.9214 | 0.9437 |
| No log | 3.0 | 75 | 0.5760 | 0.9275 | 0.9199 | 0.9235 | 0.9445 |
| No log | 4.0 | 100 | 0.5733 | 0.9254 | 0.9209 | 0.9230 | 0.9445 |
| No log | 5.0 | 125 | 0.5701 | 0.9255 | 0.9206 | 0.9229 | 0.9449 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu102
- Datasets 2.5.1
- Tokenizers 0.12.1
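As a usage sketch for token-level code-mixed language identification (hedged: the Hub repo id below is a placeholder inferred from the card name), using the widget example above:
```python
from transformers import pipeline
# Placeholder repo id; substitute the actual Hub repo for this checkpoint.
tagger = pipeline("token-classification",
                  model="<user>/bert-base-multilingual-cased-finetuned-ijelid",
                  aggregation_strategy="simple")
print(tagger("Productnya bagus bgt guys, nek bales chat cepet tur pelayanane apik."))
```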
|
Danih1502/t5-small-finetuned-en-to-de | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-21T09:03:46Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: indobertweet-finetuned-ijelid
results: []
widget:
- text: "Productnya bagus bgt guys, nek bales chat cepet tur pelayanane apik."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobertweet-finetuned-ijelid
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5301
- Precision: 0.9246
- Recall: 0.9344
- F1: 0.9293
- Accuracy: 0.9513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 49 | 0.5141 | 0.9216 | 0.9326 | 0.9268 | 0.9508 |
| No log | 2.0 | 98 | 0.5187 | 0.9222 | 0.9356 | 0.9285 | 0.9512 |
| No log | 3.0 | 147 | 0.5307 | 0.9256 | 0.9341 | 0.9296 | 0.9516 |
| No log | 4.0 | 196 | 0.5307 | 0.9251 | 0.9343 | 0.9295 | 0.9512 |
| No log | 5.0 | 245 | 0.5301 | 0.9246 | 0.9344 | 0.9293 | 0.9513 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu102
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Darkrider/covidbert_mednli | [
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.85 +/- 0.90
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
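In the meantime, a minimal hedged sketch of loading the checkpoint and rolling out one episode (the repo id and filename are placeholders, not the author's):
```python
import gym
import panda_gym  # assumption: installing panda-gym registers PandaReachDense-v2
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Placeholders: substitute the actual Hub repo for this model.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
env = gym.make("PandaReachDense-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```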
|
DarshanDeshpande/marathi-distilbert | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"mr",
"dataset:Oscar Corpus, News, Stories",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-test
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
DataikuNLP/average_word_embeddings_glove.6B.300d | [
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:apache-2.0"
]
| sentence-similarity | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Access to model paxstan/wav2vec2_960_test is restricted and you are not in the authorized list. Visit https://huggingface.co/paxstan/wav2vec2_960_test to ask for access. |
DataikuNLP/distiluse-base-multilingual-cased-v1 | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | {
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2023-01-21T10:07:31Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.47 +/- 0.40
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
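In the meantime, a minimal hedged sketch (placeholder repo id and filename) that reproduces the `mean_reward` metric with SB3's evaluation helper:
```python
import gym
import panda_gym  # assumption: installing panda-gym registers PandaReachDense-v2
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2",  # placeholder
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```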
|
Dave/twomad-model | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 702.00 +/- 307.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lunared473 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lunared473 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lunared473
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
DavidAMcIntosh/small-rick | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sa_GLUE_Experiment_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6077246933919092
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_qnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6530
- Accuracy: 0.6077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6767 | 1.0 | 410 | 0.6560 | 0.6041 |
| 0.644 | 2.0 | 820 | 0.6530 | 0.6077 |
| 0.6141 | 3.0 | 1230 | 0.6655 | 0.6074 |
| 0.5762 | 4.0 | 1640 | 0.7018 | 0.5940 |
| 0.5144 | 5.0 | 2050 | 0.7033 | 0.5934 |
| 0.4324 | 6.0 | 2460 | 0.8714 | 0.5817 |
| 0.3483 | 7.0 | 2870 | 1.0825 | 0.5847 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
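For reference, a hedged inference sketch for the QNLI sentence-pair task (placeholder repo id; labels follow the default `LABEL_0`/`LABEL_1` unless the config maps names):
```python
from transformers import pipeline
clf = pipeline("text-classification",
               model="<user>/distilbert_sa_GLUE_Experiment_qnli")  # placeholder repo id
# QNLI pairs a question with a candidate answer sentence.
print(clf({"text": "What is the capital of France?",
           "text_pair": "Paris is the capital and largest city of France."}))
```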
|
Davlan/bert-base-multilingual-cased-finetuned-amharic | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 109 | 2023-01-21T11:02:46Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
## DreamBooth model trained by perion with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook
Test prompt:
Portrait of perion man as thomas shelby in peaky blinders, highly detailed digital painting, artstation, concept art, smooth, sharp focus, illustration |
Davlan/bert-base-multilingual-cased-finetuned-igbo | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2023-01-21T11:05:40Z | ---
license: apache-2.0
---
# WD 1.4 ConvNext Tagger V2
Supports ratings, characters and general tags.
Trained using https://github.com/SmilingWolf/SW-CV-ModelZoo.
TPUs used for training kindly provided by the [TRC program](https://sites.research.google/trc/about/).
## Dataset
Last image id: 5944504
Trained on Danbooru images with IDs modulo 0000-0899.
Validated on images with IDs modulo 0950-0999.
Images with less than 10 general tags were filtered out.
Tags with less than 600 images were filtered out.
## Validation results
`P=R: threshold = 0.3685, F1 = 0.6810`
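For orientation, a rough hedged inference sketch with `onnxruntime` (assumptions: the repo id, that it ships `model.onnx` and a `selected_tags.csv` with a `name` column, and NHWC BGR input; the reference preprocessing in SW-CV-ModelZoo also pads images to square before resizing, which this sketch skips):
```python
import csv
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from PIL import Image
repo = "SmilingWolf/wd-v1-4-convnext-tagger-v2"  # assumption: Hub repo id for this card
model_path = hf_hub_download(repo, "model.onnx")       # assumption: file name
tags_path = hf_hub_download(repo, "selected_tags.csv")  # assumption: file name
session = ort.InferenceSession(model_path)
_, height, width, _ = session.get_inputs()[0].shape  # NHWC input
img = Image.open("example.jpg").convert("RGB").resize((width, height))
x = np.asarray(img, dtype=np.float32)[:, :, ::-1][None]  # RGB -> BGR, add batch dim
probs = session.run(None, {session.get_inputs()[0].name: x})[0][0]
with open(tags_path, newline="") as f:
    names = [row["name"] for row in csv.DictReader(f)]
print([(n, float(p)) for n, p in zip(names, probs) if p > 0.3685])  # card's P=R threshold
```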
## Final words
Subject to change and updates.
Downstream users are encouraged to use tagged releases rather than relying on the head of the repo. |
Davlan/bert-base-multilingual-cased-finetuned-luganda | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mobilebert_sa_GLUE_Experiment_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.8122270742358079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_mrpc
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6145
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6377 | 1.0 | 29 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6309 | 2.0 | 58 | 0.6236 | 0.6838 | 0.8122 | 0.7480 |
| 0.6306 | 3.0 | 87 | 0.6233 | 0.6838 | 0.8122 | 0.7480 |
| 0.6291 | 4.0 | 116 | 0.6226 | 0.6838 | 0.8122 | 0.7480 |
| 0.6222 | 5.0 | 145 | 0.6145 | 0.6838 | 0.8122 | 0.7480 |
| 0.5736 | 6.0 | 174 | 0.6208 | 0.7010 | 0.7939 | 0.7474 |
| 0.488 | 7.0 | 203 | 0.6414 | 0.6936 | 0.7795 | 0.7366 |
| 0.3939 | 8.0 | 232 | 0.7659 | 0.7279 | 0.8122 | 0.7701 |
| 0.3038 | 9.0 | 261 | 0.8875 | 0.7083 | 0.8027 | 0.7555 |
| 0.2636 | 10.0 | 290 | 0.9829 | 0.7034 | 0.8033 | 0.7533 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/bert-base-multilingual-cased-finetuned-wolof | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6093721398498994
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_qnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6487
- Accuracy: 0.6094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6754 | 1.0 | 819 | 0.6491 | 0.6178 |
| 0.6369 | 2.0 | 1638 | 0.6487 | 0.6094 |
| 0.6125 | 3.0 | 2457 | 0.6555 | 0.6088 |
| 0.5942 | 4.0 | 3276 | 0.6647 | 0.6028 |
| 0.5805 | 5.0 | 4095 | 0.6735 | 0.5934 |
| 0.5689 | 6.0 | 4914 | 0.6893 | 0.5978 |
| 0.5587 | 7.0 | 5733 | 0.7055 | 0.5896 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/bert-base-multilingual-cased-ner-hrl | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible",
"has_space"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 269,898 | 2023-01-21T11:32:55Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_sa_GLUE_Experiment_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.7980954736581746
- name: F1
type: f1
value: 0.7243254196075782
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_qqp
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4299
- Accuracy: 0.7981
- F1: 0.7243
- Combined Score: 0.7612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5166 | 1.0 | 1422 | 0.4817 | 0.7654 | 0.6882 | 0.7268 |
| 0.4462 | 2.0 | 2844 | 0.4460 | 0.7885 | 0.6950 | 0.7417 |
| 0.3979 | 3.0 | 4266 | 0.4299 | 0.7981 | 0.7243 | 0.7612 |
| 0.3497 | 4.0 | 5688 | 0.4417 | 0.7972 | 0.7421 | 0.7696 |
| 0.2994 | 5.0 | 7110 | 0.4330 | 0.8099 | 0.7495 | 0.7797 |
| 0.2514 | 6.0 | 8532 | 0.4764 | 0.8137 | 0.7499 | 0.7818 |
| 0.2065 | 7.0 | 9954 | 0.4819 | 0.8123 | 0.7520 | 0.7822 |
| 0.1669 | 8.0 | 11376 | 0.5460 | 0.8132 | 0.7555 | 0.7844 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/distilbert-base-multilingual-cased-masakhaner | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: microsoft-resnet-50-cartoon-emotion-detection
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8165137614678899
- name: Precision
type: precision
value: 0.8181998512273742
- name: Recall
type: recall
value: 0.8165137614678899
- name: F1
type: f1
value: 0.8172526992448356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft-resnet-50-cartoon-emotion-detection
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4801
- Accuracy: 0.8165
- Precision: 0.8182
- Recall: 0.8165
- F1: 0.8173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 0.97 | 8 | 1.3855 | 0.2294 | 0.2697 | 0.2294 | 0.2165 |
| 1.4222 | 1.97 | 16 | 1.3792 | 0.2569 | 0.2808 | 0.2569 | 0.2543 |
| 1.4183 | 2.97 | 24 | 1.3646 | 0.3853 | 0.4102 | 0.3853 | 0.3511 |
| 1.4097 | 3.97 | 32 | 1.3563 | 0.4128 | 0.5062 | 0.4128 | 0.3245 |
| 1.3944 | 4.97 | 40 | 1.3462 | 0.4037 | 0.3927 | 0.4037 | 0.2939 |
| 1.3944 | 5.97 | 48 | 1.3223 | 0.4037 | 0.5152 | 0.4037 | 0.2841 |
| 1.411 | 6.97 | 56 | 1.3040 | 0.4128 | 0.4404 | 0.4128 | 0.2985 |
| 1.346 | 7.97 | 64 | 1.2700 | 0.4954 | 0.4960 | 0.4954 | 0.4093 |
| 1.3031 | 8.97 | 72 | 1.2150 | 0.5596 | 0.5440 | 0.5596 | 0.4672 |
| 1.2371 | 9.97 | 80 | 1.1580 | 0.5963 | 0.5659 | 0.5963 | 0.5101 |
| 1.2371 | 10.97 | 88 | 1.0670 | 0.6055 | 0.7279 | 0.6055 | 0.5211 |
| 1.1736 | 11.97 | 96 | 0.9856 | 0.6606 | 0.5537 | 0.6606 | 0.5772 |
| 1.0457 | 12.97 | 104 | 0.8963 | 0.6697 | 0.7631 | 0.6697 | 0.5965 |
| 0.953 | 13.97 | 112 | 0.8547 | 0.6697 | 0.6885 | 0.6697 | 0.6081 |
| 0.8579 | 14.97 | 120 | 0.7849 | 0.7156 | 0.7396 | 0.7156 | 0.6643 |
| 0.8579 | 15.97 | 128 | 0.7564 | 0.7431 | 0.7372 | 0.7431 | 0.7119 |
| 0.8167 | 16.97 | 136 | 0.7133 | 0.7615 | 0.7507 | 0.7615 | 0.7211 |
| 0.7273 | 17.97 | 144 | 0.6888 | 0.7523 | 0.7379 | 0.7523 | 0.7202 |
| 0.6547 | 18.97 | 152 | 0.6592 | 0.7798 | 0.7773 | 0.7798 | 0.7577 |
| 0.5963 | 19.97 | 160 | 0.6136 | 0.7706 | 0.7642 | 0.7706 | 0.7551 |
| 0.5963 | 20.97 | 168 | 0.5723 | 0.7890 | 0.7802 | 0.7890 | 0.7787 |
| 0.551 | 21.97 | 176 | 0.5686 | 0.7890 | 0.7761 | 0.7890 | 0.7781 |
| 0.4929 | 22.97 | 184 | 0.5597 | 0.7706 | 0.7649 | 0.7706 | 0.7651 |
| 0.4309 | 23.97 | 192 | 0.5234 | 0.7890 | 0.7774 | 0.7890 | 0.7810 |
| 0.3945 | 24.97 | 200 | 0.5008 | 0.7890 | 0.7837 | 0.7890 | 0.7813 |
| 0.3945 | 25.97 | 208 | 0.5289 | 0.7523 | 0.7537 | 0.7523 | 0.7529 |
| 0.3704 | 26.97 | 216 | 0.4399 | 0.7982 | 0.7958 | 0.7982 | 0.7963 |
| 0.3267 | 27.97 | 224 | 0.4539 | 0.8073 | 0.7983 | 0.8073 | 0.8005 |
| 0.2966 | 28.97 | 232 | 0.4735 | 0.7798 | 0.7892 | 0.7798 | 0.7837 |
| 0.2645 | 29.97 | 240 | 0.4594 | 0.7706 | 0.7706 | 0.7706 | 0.7706 |
| 0.2645 | 30.97 | 248 | 0.4699 | 0.7523 | 0.7554 | 0.7523 | 0.7533 |
| 0.2527 | 31.97 | 256 | 0.4551 | 0.7890 | 0.7856 | 0.7890 | 0.7857 |
| 0.2202 | 32.97 | 264 | 0.4458 | 0.8165 | 0.8198 | 0.8165 | 0.8170 |
| 0.2006 | 33.97 | 272 | 0.4632 | 0.7798 | 0.7941 | 0.7798 | 0.7850 |
| 0.1589 | 34.97 | 280 | 0.4651 | 0.7890 | 0.7993 | 0.7890 | 0.7925 |
| 0.1589 | 35.97 | 288 | 0.4595 | 0.7798 | 0.7824 | 0.7798 | 0.7804 |
| 0.153 | 36.97 | 296 | 0.4584 | 0.7615 | 0.7691 | 0.7615 | 0.7633 |
| 0.1427 | 37.97 | 304 | 0.4608 | 0.7798 | 0.7830 | 0.7798 | 0.7796 |
| 0.113 | 38.97 | 312 | 0.4571 | 0.7890 | 0.7922 | 0.7890 | 0.7899 |
| 0.1146 | 39.97 | 320 | 0.5270 | 0.7615 | 0.7651 | 0.7615 | 0.7613 |
| 0.1146 | 40.97 | 328 | 0.4888 | 0.7706 | 0.7782 | 0.7706 | 0.7710 |
| 0.1275 | 41.97 | 336 | 0.4523 | 0.7890 | 0.7809 | 0.7890 | 0.7837 |
| 0.0959 | 42.97 | 344 | 0.4697 | 0.7798 | 0.7753 | 0.7798 | 0.7767 |
| 0.0882 | 43.97 | 352 | 0.4286 | 0.7706 | 0.7686 | 0.7706 | 0.7686 |
| 0.0847 | 44.97 | 360 | 0.5317 | 0.7890 | 0.7993 | 0.7890 | 0.7925 |
| 0.0847 | 45.97 | 368 | 0.5431 | 0.7615 | 0.7700 | 0.7615 | 0.7647 |
| 0.0813 | 46.97 | 376 | 0.4432 | 0.8257 | 0.8435 | 0.8257 | 0.8284 |
| 0.0768 | 47.97 | 384 | 0.4886 | 0.7982 | 0.8005 | 0.7982 | 0.7956 |
| 0.0627 | 48.97 | 392 | 0.5373 | 0.7982 | 0.8072 | 0.7982 | 0.8010 |
| 0.0688 | 49.97 | 400 | 0.5897 | 0.7798 | 0.7892 | 0.7798 | 0.7822 |
| 0.0688 | 50.97 | 408 | 0.5115 | 0.7982 | 0.8015 | 0.7982 | 0.7992 |
| 0.0676 | 51.97 | 416 | 0.4881 | 0.7982 | 0.7998 | 0.7982 | 0.7978 |
| 0.0539 | 52.97 | 424 | 0.4820 | 0.8073 | 0.8139 | 0.8073 | 0.8077 |
| 0.0596 | 53.97 | 432 | 0.4450 | 0.8257 | 0.8246 | 0.8257 | 0.8244 |
| 0.0611 | 54.97 | 440 | 0.5057 | 0.7890 | 0.8008 | 0.7890 | 0.7924 |
| 0.0611 | 55.97 | 448 | 0.4918 | 0.7982 | 0.8056 | 0.7982 | 0.8008 |
| 0.0643 | 56.97 | 456 | 0.5946 | 0.7523 | 0.7587 | 0.7523 | 0.7545 |
| 0.0605 | 57.97 | 464 | 0.4888 | 0.8073 | 0.8239 | 0.8073 | 0.8121 |
| 0.063 | 58.97 | 472 | 0.5917 | 0.7890 | 0.8051 | 0.7890 | 0.7937 |
| 0.0595 | 59.97 | 480 | 0.5117 | 0.7890 | 0.7904 | 0.7890 | 0.7894 |
| 0.0595 | 60.97 | 488 | 0.5497 | 0.7615 | 0.7692 | 0.7615 | 0.7635 |
| 0.0554 | 61.97 | 496 | 0.4742 | 0.8165 | 0.8101 | 0.8165 | 0.8126 |
| 0.0557 | 62.97 | 504 | 0.5369 | 0.7890 | 0.7886 | 0.7890 | 0.7886 |
| 0.0539 | 63.97 | 512 | 0.5440 | 0.7890 | 0.7922 | 0.7890 | 0.7899 |
| 0.048 | 64.97 | 520 | 0.5924 | 0.7890 | 0.7878 | 0.7890 | 0.7883 |
| 0.048 | 65.97 | 528 | 0.4863 | 0.8440 | 0.8440 | 0.8440 | 0.8440 |
| 0.045 | 66.97 | 536 | 0.5850 | 0.8073 | 0.8076 | 0.8073 | 0.8047 |
| 0.047 | 67.97 | 544 | 0.4939 | 0.8257 | 0.8212 | 0.8257 | 0.8227 |
| 0.0412 | 68.97 | 552 | 0.4850 | 0.7890 | 0.7912 | 0.7890 | 0.7900 |
| 0.0392 | 69.97 | 560 | 0.5066 | 0.8257 | 0.8265 | 0.8257 | 0.8258 |
| 0.0392 | 70.97 | 568 | 0.4965 | 0.8073 | 0.8053 | 0.8073 | 0.8058 |
| 0.0423 | 71.97 | 576 | 0.4717 | 0.8349 | 0.8376 | 0.8349 | 0.8351 |
| 0.0471 | 72.97 | 584 | 0.4845 | 0.8257 | 0.8378 | 0.8257 | 0.8296 |
| 0.0322 | 73.97 | 592 | 0.5188 | 0.7706 | 0.7689 | 0.7706 | 0.7693 |
| 0.042 | 74.97 | 600 | 0.5242 | 0.7706 | 0.7699 | 0.7706 | 0.7701 |
| 0.042 | 75.97 | 608 | 0.5945 | 0.7798 | 0.7824 | 0.7798 | 0.7804 |
| 0.0416 | 76.97 | 616 | 0.5432 | 0.7982 | 0.8038 | 0.7982 | 0.7993 |
| 0.0399 | 77.97 | 624 | 0.5381 | 0.7982 | 0.8072 | 0.7982 | 0.7994 |
| 0.0439 | 78.97 | 632 | 0.6181 | 0.7798 | 0.7878 | 0.7798 | 0.7827 |
| 0.0462 | 79.97 | 640 | 0.4801 | 0.8165 | 0.8182 | 0.8165 | 0.8173 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.11.0
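A short hedged usage sketch (the Hub repo id is a placeholder inferred from the card name):
```python
from transformers import pipeline
clf = pipeline("image-classification",
               model="<user>/microsoft-resnet-50-cartoon-emotion-detection")  # placeholder
print(clf("cartoon_frame.png"))  # top emotion labels with scores
```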
|
Davlan/mbart50-large-yor-eng-mt | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
---
# WD 1.4 SwinV2 Tagger V2
Supports ratings, characters and general tags.
Trained using https://github.com/SmilingWolf/SW-CV-ModelZoo.
TPUs used for training kindly provided by the [TRC program](https://sites.research.google/trc/about/).
## Dataset
Last image id: 5944504
Trained on Danbooru images with IDs modulo 0000-0899.
Validated on images with IDs modulo 0950-0999.
Images with less than 10 general tags were filtered out.
Tags with less than 600 images were filtered out.
## Validation results
`P=R: threshold = 0.3771, F1 = 0.6854`
## Final words
Subject to change and updates.
Downstream users are encouraged to use tagged releases rather than relying on the head of the repo. |
Davlan/mt5-small-pcm-en | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: final-conditional
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final-conditional
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.1,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'batch_size': 128,
'every_n_steps': 384,
'force_call_on': [12588],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 384,
'force_call_on': [12588],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>',
'should_insert_prefix': False},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': 'cf05a2b0558c03b08c78f07662c22989785b9520'},
'num_additional_tokens': 2,
'path_or_name': 'kejian/mighty-mle'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'kejian/mighty-mle',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'final-conditional',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 12588,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/2y1ch31s |
Davlan/mt5_base_yor_eng_mt | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-21T12:02:02Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym
# load_from_hub here is the Deep RL course helper: it downloads the pickle and returns the model dict
model = load_from_hub(repo_id="Sudomannen/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
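Continuing from the snippet above, a hedged sketch of acting greedily with the loaded Q-table (assumption: the pickled dict exposes a `qtable` entry, as in the course notebooks):
```python
import numpy as np
# Run one greedy episode with the downloaded Q-table.
state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for the current state
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```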
|
Davlan/xlm-roberta-base-finetuned-english | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: agercas/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Davlan/xlm-roberta-base-finetuned-igbo | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.37 +/- 23.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
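In the meantime, a minimal hedged sketch (placeholder repo id and filename; LunarLander-v2 additionally needs Box2D installed):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2",  # placeholder
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
env = gym.make("LunarLander-v2")
obs = env.reset()
done, total = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total += reward
print("episode return:", total)
```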
|
Davlan/xlm-roberta-base-finetuned-lingala | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikitext
metrics:
- accuracy
model-index:
- name: mobilebert_sa_pre-training-complete
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: wikitext wikitext-103-raw-v1
type: wikitext
config: wikitext-103-raw-v1
split: validation
args: wikitext-103-raw-v1
metrics:
- name: Accuracy
type: accuracy
value: 0.7161816392520737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_pre-training-complete
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the wikitext wikitext-103-raw-v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3239
- Accuracy: 0.7162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 300000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.6028 | 1.0 | 7145 | 1.4525 | 0.6935 |
| 1.5524 | 2.0 | 14290 | 1.4375 | 0.6993 |
| 1.5323 | 3.0 | 21435 | 1.4194 | 0.6993 |
| 1.5191 | 4.0 | 28580 | 1.4110 | 0.7027 |
| 1.5025 | 5.0 | 35725 | 1.4168 | 0.7014 |
| 1.4902 | 6.0 | 42870 | 1.3931 | 0.7012 |
| 1.4813 | 7.0 | 50015 | 1.3738 | 0.7057 |
| 1.4751 | 8.0 | 57160 | 1.4237 | 0.6996 |
| 1.4689 | 9.0 | 64305 | 1.3969 | 0.7047 |
| 1.4626 | 10.0 | 71450 | 1.3916 | 0.7068 |
| 1.4566 | 11.0 | 78595 | 1.3686 | 0.7072 |
| 1.451 | 12.0 | 85740 | 1.3811 | 0.7060 |
| 1.4478 | 13.0 | 92885 | 1.3598 | 0.7092 |
| 1.4441 | 14.0 | 100030 | 1.3790 | 0.7054 |
| 1.4379 | 15.0 | 107175 | 1.3794 | 0.7066 |
| 1.4353 | 16.0 | 114320 | 1.3609 | 0.7102 |
| 1.43 | 17.0 | 121465 | 1.3685 | 0.7083 |
| 1.4278 | 18.0 | 128610 | 1.3953 | 0.7036 |
| 1.4219 | 19.0 | 135755 | 1.3756 | 0.7085 |
| 1.4197 | 20.0 | 142900 | 1.3597 | 0.7090 |
| 1.4169 | 21.0 | 150045 | 1.3673 | 0.7061 |
| 1.4146 | 22.0 | 157190 | 1.3753 | 0.7073 |
| 1.4109 | 23.0 | 164335 | 1.3696 | 0.7082 |
| 1.4073 | 24.0 | 171480 | 1.3563 | 0.7092 |
| 1.4054 | 25.0 | 178625 | 1.3712 | 0.7103 |
| 1.402 | 26.0 | 185770 | 1.3528 | 0.7113 |
| 1.4001 | 27.0 | 192915 | 1.3367 | 0.7123 |
| 1.397 | 28.0 | 200060 | 1.3508 | 0.7118 |
| 1.3955 | 29.0 | 207205 | 1.3572 | 0.7117 |
| 1.3937 | 30.0 | 214350 | 1.3566 | 0.7095 |
| 1.3901 | 31.0 | 221495 | 1.3515 | 0.7117 |
| 1.3874 | 32.0 | 228640 | 1.3445 | 0.7118 |
| 1.386 | 33.0 | 235785 | 1.3611 | 0.7097 |
| 1.3833 | 34.0 | 242930 | 1.3502 | 0.7087 |
| 1.3822 | 35.0 | 250075 | 1.3657 | 0.7108 |
| 1.3797 | 36.0 | 257220 | 1.3576 | 0.7108 |
| 1.3793 | 37.0 | 264365 | 1.3472 | 0.7106 |
| 1.3763 | 38.0 | 271510 | 1.3323 | 0.7156 |
| 1.3762 | 39.0 | 278655 | 1.3325 | 0.7145 |
| 1.3748 | 40.0 | 285800 | 1.3243 | 0.7138 |
| 1.3733 | 41.0 | 292945 | 1.3218 | 0.7170 |
| 1.3722 | 41.99 | 300000 | 1.3074 | 0.7186 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Davlan/xlm-roberta-base-finetuned-luo | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad-v1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
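As a SQuAD-style extractive QA model, it can be used through the question-answering pipeline; a minimal sketch (the repo namespace is not stated here, so the id below is hypothetical):
```python
from transformers import pipeline

# Hypothetical repo id; replace "<user>" with the actual namespace of this checkpoint
qa = pipeline("question-answering", model="<user>/bert-finetuned-squad-v1")
result = qa(question="Where does Tim live?", context="My name is Tim and I live in Sweden.")
print(result["answer"], result["score"])
```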
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/xlm-roberta-base-finetuned-shona | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: train
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.939
- name: F1
type: f1
value: 0.9390864815601339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1591
- Accuracy: 0.939
- F1: 0.9391
## Model description
More information needed
## Intended uses & limitations
More information needed
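A minimal inference sketch for this emotion classifier (the repo namespace is not stated in the card, so the id below is hypothetical):
```python
from transformers import pipeline

# Hypothetical repo id; replace "<user>" with the actual namespace of this checkpoint
classifier = pipeline("text-classification", model="<user>/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```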
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2497 | 1.0 | 1000 | 0.2133 | 0.9255 | 0.9252 |
| 0.1498 | 2.0 | 2000 | 0.1652 | 0.934 | 0.9339 |
| 0.0965 | 3.0 | 3000 | 0.1591 | 0.939 | 0.9391 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/xlm-roberta-base-finetuned-swahili | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 40 | 2023-01-21T12:54:28Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: menelaos/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Davlan/xlm-roberta-base-finetuned-xhosa | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2023-01-21T12:58:19Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1955.03 +/- 61.01
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the repo id and filename are placeholders to swap for this model's actual values:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo id and filename; replace with this model's actual values
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Davlan/xlm-roberta-base-ner-hrl | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 760 | 2023-01-21T13:15:30Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: NielsV/ppo-pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Davlan/xlm-roberta-base-wikiann-ner | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 235 | null | ---
tags:
- generated_from_trainer
model-index:
- name: phobert_notaugmentation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert_notaugmentation
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4768
## Model description
More information needed
## Intended uses & limitations
More information needed
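The task head is not documented here, so as a minimal sketch you can at least load the checkpoint for inspection (hypothetical repo id); note that PhoBERT expects word-segmented Vietnamese input:
```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical repo id; the card does not state the namespace
tokenizer = AutoTokenizer.from_pretrained("<user>/phobert_notaugmentation")
model = AutoModel.from_pretrained("<user>/phobert_notaugmentation")  # loads the encoder without the task head
```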
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5387 | 1.0 | 992 | 1.1605 |
| 1.2971 | 2.0 | 1984 | 0.8465 |
| 0.976 | 3.0 | 2976 | 0.6289 |
| 0.7925 | 4.0 | 3968 | 0.5237 |
| 0.6846 | 5.0 | 4960 | 0.4768 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Davlan/xlm-roberta-large-ner-hrl | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,322 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: utyug1/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Dayout/test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-21T14:11:43Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sa_GLUE_Experiment_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.4693140794223827
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_rte
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6960
- Accuracy: 0.4693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7912 | 1.0 | 10 | 0.7427 | 0.4729 |
| 0.7025 | 2.0 | 20 | 0.7159 | 0.4729 |
| 0.6982 | 3.0 | 30 | 0.7001 | 0.4729 |
| 0.696 | 4.0 | 40 | 0.7030 | 0.4729 |
| 0.6929 | 5.0 | 50 | 0.6960 | 0.4693 |
| 0.6684 | 6.0 | 60 | 0.7082 | 0.5018 |
| 0.5463 | 7.0 | 70 | 1.0469 | 0.4838 |
| 0.3935 | 8.0 | 80 | 1.0870 | 0.5271 |
| 0.277 | 9.0 | 90 | 1.2738 | 0.4982 |
| 0.1839 | 10.0 | 100 | 1.5369 | 0.5162 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Dazai/Ok | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-21T14:14:51Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sa_GLUE_Experiment_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8073394495412844
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4455
- Accuracy: 0.8073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4404 | 1.0 | 264 | 0.5503 | 0.7477 |
| 0.2565 | 2.0 | 528 | 0.6115 | 0.7580 |
| 0.2067 | 3.0 | 792 | 0.4455 | 0.8073 |
| 0.1714 | 4.0 | 1056 | 0.5150 | 0.7947 |
| 0.1438 | 5.0 | 1320 | 0.5712 | 0.7867 |
| 0.1162 | 6.0 | 1584 | 0.6657 | 0.7878 |
| 0.0992 | 7.0 | 1848 | 0.6404 | 0.7821 |
| 0.08 | 8.0 | 2112 | 0.7414 | 0.7924 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
DeBERTa/deberta-v2-xxlarge | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
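A minimal inference sketch with the object-detection pipeline (hypothetical repo id; the pipeline also needs the `timm` package installed):
```python
from PIL import Image
from transformers import pipeline

# Hypothetical repo id; replace "<user>" with the actual namespace of this checkpoint
detector = pipeline("object-detection", model="<user>/detr-resnet-50_finetuned_cppe5")
image = Image.open("example.jpg")
for det in detector(image):
    print(det["label"], round(det["score"], 3), det["box"])
```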
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
DeadBeast/korscm-mBERT | [
"pytorch",
"bert",
"text-classification",
"korean",
"dataset:Korean-Sarcasm",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 43 | 2023-01-21T14:26:38Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: distilbert_sa_GLUE_Experiment_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.16099086345210553
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3709
- Pearson: 0.1628
- Spearmanr: 0.1610
- Combined Score: 0.1619
## Model description
More information needed
## Intended uses & limitations
More information needed
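Since STSB is a sentence-pair regression task (the head has a single output), a minimal scoring sketch looks like this, assuming a recent transformers version (hypothetical repo id):
```python
from transformers import pipeline

# Hypothetical repo id; function_to_apply="none" returns the raw regression score
scorer = pipeline("text-classification",
                  model="<user>/distilbert_sa_GLUE_Experiment_stsb",
                  function_to_apply="none")
print(scorer({"text": "A man is playing a guitar.", "text_pair": "Someone plays the guitar."}))
```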
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 3.4997 | 1.0 | 23 | 2.5067 | 0.0558 | 0.0619 | 0.0588 |
| 2.0151 | 2.0 | 46 | 2.4888 | 0.1092 | 0.0973 | 0.1033 |
| 1.8234 | 3.0 | 69 | 2.3709 | 0.1628 | 0.1610 | 0.1619 |
| 1.5482 | 4.0 | 92 | 3.0640 | 0.1571 | 0.1632 | 0.1602 |
| 1.33 | 5.0 | 115 | 3.1306 | 0.1649 | 0.1896 | 0.1772 |
| 1.1586 | 6.0 | 138 | 2.9752 | 0.1454 | 0.1567 | 0.1511 |
| 1.0473 | 7.0 | 161 | 3.1783 | 0.1490 | 0.1670 | 0.1580 |
| 0.9198 | 8.0 | 184 | 3.0440 | 0.1632 | 0.1734 | 0.1683 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
DeadBeast/mbert-base-cased-finetuned-bengali-fakenews | [
"pytorch",
"bert",
"text-classification",
"bengali",
"dataset:BanFakeNews",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | 2023-01-21T14:38:54Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 15.50 +/- 12.54
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Rschmaelzle -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Rschmaelzle -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Rschmaelzle
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
DeadBeast/roberta-base-pretrained-mr | [
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the small helper from the Deep RL Course notebook (sketched below)
model = load_from_hub(repo_id="smeth/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
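For reference, a sketch of that `load_from_hub` helper, assuming it matches the course notebook's behavior:
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-table bundle from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```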
|
DecafNosebleed/DialoGPT-small-ScaraBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the pickle-loading helper from the Deep RL Course notebook
model = load_from_hub(repo_id="smeth/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Declan/Breitbart_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2023-01-21T15:16:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.01 +/- 17.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (hypothetical repo id and filename):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo id and filename; replace with this model's actual values
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Declan/Breitbart_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2023-01-21T15:22:49Z | ---
license: unknown
pipeline_tag: text-to-image
--- |
Declan/Breitbart_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.70 +/- 18.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch with placeholder repo id and filename:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo id and filename; replace with this model's actual values
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Declan/Breitbart_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2023-01-21T15:32:27Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1226.55 +/- 185.67
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (placeholders for the repo id and filename):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo id and filename; replace with this model's actual values
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Declan/CNN_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2023-01-21T15:36:13Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- pytorch
- diffusers
- dreambooth-hackathon
- food
widget:
- text: a butterfly ice cream, icenimal
---
# Ice_cream_animals DreamBooth Model for Food, trained on a custom dataset.
This is a Stable Diffusion **2.1 768px** model fine-tuned on the food concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a butterfly ice cream, icenimal**
This model was created as part of the DreamBooth Hackathon 🔥.
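A minimal generation sketch with 🧨 diffusers (the repo id is taken from the sample-image URLs below; generating at 768x768 since this is an SD 2.1 768px fine-tune):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "axolotron/ice-cream-animals", torch_dtype=torch.float16
).to("cuda")

image = pipe("a butterfly ice cream, icenimal", height=768, width=768).images[0]
image.save("butterfly_icenimal.png")
```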
Samples:
A red dragon
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpnijfm61w.png">
A disney princess
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpjitwzmys.png">
A demogorgon
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpbbqipc46.png">
An elephant
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmp5u6oo1j1.png">
A bee
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpdgxfsle_.png">
An axolotl
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpowhy01r_.png">
a cat
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmp07iw9qf1.png">
Pokemon
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmp3q0ru2k_.png">
Donald Trump as ice cream
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpon6crc5e.png">
A butterfly
<img width="200px" height="200px" src="https://huggingface.co/axolotron/ice-cream-animals/resolve/main/sample_images/tmpxt87y5n7.png">
|
Declan/CNN_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2023-01-21T15:37:37Z | ---
license: mit
tags:
- object-detection
- computer-vision
- sort
- tracker
- ocsort
---
### Model Description
[Sort](https://arxiv.org/abs/1602.00763): A simple online and realtime tracking algorithm for 2D multiple object tracking in video sequences.
<img src="https://raw.githubusercontent.com/noahcao/OC_SORT/master/assets/teaser.png" width="600"/>
### Installation
```
pip install sort-track
```
### Tracker
```python
from sort.tracker import SortTracker

# args holds the tracker hyperparameters (e.g. max_age, min_hits, iou_threshold); placeholder here
tracker = SortTracker(args)

for image in images:
    dets = detector(image)  # detections, typically an (N, 5) array of [x1, y1, x2, y2, score]
    online_targets = tracker.update(dets)  # returns the tracks that are currently online
```
### BibTeX Entry and Citation Info
```
@inproceedings{Bewley2016_sort,
author={Bewley, Alex and Ge, Zongyuan and Ott, Lionel and Ramos, Fabio and Upcroft, Ben},
booktitle={2016 IEEE International Conference on Image Processing (ICIP)},
title={Simple online and realtime tracking},
year={2016},
pages={3464-3468},
keywords={Benchmark testing;Complexity theory;Detectors;Kalman filters;Target tracking;Visualization;Computer Vision;Data Association;Detection;Multiple Object Tracking},
doi={10.1109/ICIP.2016.7533003}
}
``` |
Declan/CNN_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
datasets:
- pile-of-law/pile-of-law
metrics:
- accuracy
- abdusah/aradiawer
pipeline_tag: document-question-answering
tags:
- legal
co2_eq_emissions:
emissions: 0.2345
language:
- en
library_name: diffusers
--- |
Declan/FoxNews_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2023-01-21T16:08:15Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 403.40 +/- 193.20
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Declan/Reuters_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('dobis-lks/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
DeltaHub/lora_t5-base_mrpc | [
"pytorch",
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | I have made a re-upload for anything v3
this is a re upload :D
Reupload of the Anything-v3 from Linaqruf.
becuase anythingv3 is down :( |
Deniskin/gpt3_medium | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 52 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1b
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
DeskDown/MarianMixFT_en-hi | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | Access to model mutableonez/mutable_visionara is restricted and you are not in the authorized list. Visit https://huggingface.co/mutableonez/mutable_visionara to ask for access. |
Waynehillsdev/Waynehills_summary_tensorflow | [
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
---
### MoistMix V2
Based on SD1.5
A much improved version of my last model, with quite a different style, but sticking to the same theme: a "do anything" type of model. It LOVES artist prompts, and they can change up the whole image dramatically.
For any questions or feedback: [Pixiv](https://pixiv.me/moistmix)
This model is provided as both a CKPT and a SafeTensors file; the SafeTensors file can be used the same way as the CKPT, with (sometimes) faster loading while avoiding pickles. I am including my VAE as well.
[CKPT download link](https://huggingface.co/MoistMix/MoistMixV2/resolve/main/MoistMixV2.ckpt)
[SafeTensors download link](https://huggingface.co/MoistMix/MoistMixV2/resolve/main/MoistMixV2.safetensors)
[VAE download link](https://huggingface.co/MoistMix/MoistMixV2/resolve/main/MoistMixV2.vae.pt)
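A minimal loading sketch with 🧨 diffusers, assuming a recent release with single-file checkpoint support (the prompt is just an example):
```python
from diffusers import StableDiffusionPipeline

# Loads the SafeTensors checkpoint linked above directly from the Hub
pipe = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/MoistMix/MoistMixV2/resolve/main/MoistMixV2.safetensors"
)
image = pipe("a cozy cabin in a snowy forest, by moonlight").images[0]
image.save("moistmix_sample.png")
```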
**A small sample of images generated with this model:**

## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
bert-base-german-dbmdz-uncased | [
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68,305 | 2023-01-22T00:35:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### test11.11 Dreambooth model trained by ukeeba with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AAli/distilbert-base-uncased-finetuned-squad | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-22T05:19:15Z | Model based on camembert-base, fine tuned on glue, mrpc. Created for test purposed for hf course for training interaction with git. |
AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 | [
"pytorch",
"xlm-roberta",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
]
| sentence-similarity | {
"architectures": [
"XLMRobertaModel"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 73 | 2023-01-22T06:21:14Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AIDynamics/DialoGPT-medium-MentorDealerGuy | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-22T06:40:34Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### cv1.2 Dreambooth model trained by ukeeba with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AbyV/test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ekardatos/distilbert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ekardatos/distilbert-finetuned-squad
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7301
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the optimizer sketch after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
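As a rough illustration, the optimizer above maps onto `transformers.create_optimizer`; the values are read off the config, and everything else (variable names, the missing warmup) is an assumption:
```python
import tensorflow as tf
from transformers import create_optimizer
tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision above
# PolynomialDecay from 2e-05 to 0.0 over 16635 steps, 0.01 weight decay, no warmup.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,
    num_train_steps=16635,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```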
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.7301 | 0 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AdapterHub/bert-base-uncased-pf-newsqa | [
"bert",
"en",
"dataset:newsqa",
"arxiv:2104.08247",
"adapter-transformers",
"question-answering"
]
| question-answering | {
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.48 +/- 20.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the repo id and checkpoint filename are assumptions to fill in):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Download the checkpoint from this repo and load it (names assumed).
checkpoint = load_from_hub(repo_id="<repo-id>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AdapterHub/roberta-base-pf-copa | [
"roberta",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"adapterhub:comsense/copa"
]
| null | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### animecharacters Dreambooth model trained by anmol-chawla with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
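If you prefer plain `diffusers` code over the notebook, a minimal sketch (the repo id is an assumption; the prompt token is assumed to match the concept name):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait of animecharacters").images[0]
image.save("sample.png")
```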
Sample pictures of this concept:
|
Ahda/M | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-22T20:01:32Z | https://t.me/+a-k8rVfjIVk3NGU1
https://t.me/loraeveryone
These are the Telegram groups; updates will land on Telegram first, since Telegram lets us upload the original files directly, while this page will be updated more slowly!
This is a collection of LoRAs for various art styles and outfits; please feel free to contribute additions!
Downloads are available both as a full bundle and as individual characters. Because files with Chinese names cannot be downloaded directly, they are packaged as archives; after downloading, unzip them and the corresponding Chinese file names will be inside. Curator contact: QQ 3062945846
Collected and organized purely for the convenience of Chinese-speaking users!
A screenshot of the catalog is available for reference!
We deeply respect every LoRA author!
Thank you all for your work!
Hello everyone, this is the curator. I am putting together higher-quality LoRA models: 70+ have already been organized, labeled in Chinese, with the trigger tags written directly into the file names; some complex outfits and accessories also come with a same-named text file for easy reference. If you have good LoRAs that differ from the current ones, please send them to me; I will organize, categorize, and share them all (LoRA models, that is, not the usual full base models). |
Aleksandar1932/distilgpt2-rock | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: tiny-mlm-snli-from-scratch-target-rotten_tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-snli-from-scratch-target-rotten_tomatoes
This model is a fine-tuned version of [muhtasham/tiny-mlm-snli-from-scratch](https://huggingface.co/muhtasham/tiny-mlm-snli-from-scratch) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7400
- Accuracy: 0.7448
- F1: 0.7441
## Model description
More information needed
## Intended uses & limitations
More information needed
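Pending fuller documentation, a minimal usage sketch (the repo id is an assumption; the metrics above suggest binary sentiment classification on rotten_tomatoes-style reviews):
```python
from transformers import pipeline
clf = pipeline("text-classification", model="<this-repo-id>")
print(clf("A gripping, beautifully shot film."))
```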
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.693 | 1.87 | 500 | 0.6913 | 0.5038 | 0.3480 |
| 0.6626 | 3.75 | 1000 | 0.6427 | 0.6379 | 0.6351 |
| 0.5259 | 5.62 | 1500 | 0.5793 | 0.7223 | 0.7222 |
| 0.3566 | 7.49 | 2000 | 0.5842 | 0.7467 | 0.7467 |
| 0.2709 | 9.36 | 2500 | 0.6819 | 0.7355 | 0.7343 |
| 0.2148 | 11.24 | 3000 | 0.7748 | 0.7420 | 0.7410 |
| 0.1936 | 13.11 | 3500 | 0.7400 | 0.7448 | 0.7441 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
AnonymousSub/AR_rule_based_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-01-13T14:11:58Z | # OpenAI [Whisper](https://github.com/openai/whisper) Inference Endpoint example
> Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
For more information about the model, license and limitations check the original repository at [openai/whisper](https://github.com/openai/whisper).
---
This repository implements a custom `handler` task for `automatic-speech-recognition` for 🤗 Inference Endpoints using OpenAIs new Whisper model. The code for the customized pipeline is in the [pipeline.py](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/create_handler.ipynb) included, on how to create the `handler.py`
### Request
The endpoint expects a binary audio file. Below is a cURL example and a Python example using the `requests` library.
**curl**
```bash
# load audio file
wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
# run request
curl --request POST \
--url https://{ENDPOINT}/ \
--header 'Content-Type: audio/x-flac' \
--header 'Authorization: Bearer {HF_TOKEN}' \
--data-binary '@sample1.flac'
```
**Python**
```python
import mimetypes
import requests as r
ENDPOINT_URL=""
HF_TOKEN=""
def predict(path_to_audio:str=None):
# read audio file
with open(path_to_audio, "rb") as i:
b = i.read()
# get mimetype
content_type= mimetypes.guess_type(path_to_audio)[0]
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": content_type
}
response = r.post(ENDPOINT_URL, headers=headers, data=b)
return response.json()
prediction = predict(path_to_audio="sample1.flac")
prediction
```
expected output
```json
{"text": " going along slushy country roads and speaking to damp audiences in draughty school rooms day after day for a fortnight. He'll have to put in an appearance at some place of worship on Sunday morning, and he can come to us immediately afterwards."}
```
|
AnonymousSub/declutr-model_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-01-23T09:19:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4782
- Rouge1: 28.2715
- Rouge2: 7.7391
- Rougel: 22.1911
- Rougelsum: 22.1973
- Gen Len: 18.8268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the `Seq2SeqTrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
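A rough `Seq2SeqTrainingArguments` equivalent of the list above (`output_dir` is an assumption):
```python
from transformers import Seq2SeqTrainingArguments
args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-xsum",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # mixed_precision_training: Native AMP
)
```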
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7155 | 1.0 | 12753 | 2.4782 | 28.2715 | 7.7391 | 22.1911 | 22.1973 | 18.8268 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | Access to model paliwal40/MadhuriD is restricted and you are not in the authorized list. Visit https://huggingface.co/paliwal40/MadhuriD to ask for access. |
AnonymousSub/rule_based_roberta_hier_triplet_0.1_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Stable_Diffusion-trained-on-YUJIRO-HANMA-images(Baki-anime)-Fun-project model trained by nicky007
Trained on the Yujiro Hanma character from the Baki the Grappler anime. It's just a fun project because I was bored.
Try prompts like:
**'yujiro hanma clay statue'**,
**'yujiro hanma laughing and angry pose'**,
**'yujiro hanma posing very angry'** etc
Or try your own unique text; a minimal `diffusers` sketch follows.
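A minimal `diffusers` sketch using one of the prompts above (the repo id is an assumption to fill in):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16).to("cuda")
image = pipe("yujiro hanma clay statue").images[0]
image.save("yujiro.png")
```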
**Enjoy, have a wonderful day!!** |
AnonymousSub/rule_based_roberta_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4828 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1.7094555110821448e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 28968,
"warmup_steps": 2897,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4820 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1.7094555110821448e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 28920,
"warmup_steps": 2892,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: fav_foods
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7631579041481018
---
# fav_foods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
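A minimal inference sketch (the repo id is an assumption to fill in):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="<this-repo-id>")
print(classifier("dosa.jpg"))  # path to a local image of one of the classes below
```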
## Example Images
#### dosa

#### idli

#### pani puri

#### samosa

#### vada pav
 |
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3518 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1.7094555110821448e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 21108,
"warmup_steps": 2111,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 23 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
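Pending fuller documentation, a quick sanity check with the fill-mask pipeline (the repo id is an assumption to fill in):
```python
from transformers import pipeline
fill = pipeline("fill-mask", model="<this-repo-id>")
print(fill("This movie was absolutely [MASK]."))
```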
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.13.1+cu116
- Datasets 1.17.0
- Tokenizers 0.10.3
|
AnonymousSub/unsup-consert-emanuals | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
- textual-inversion
- embedding
---
[<img src="https://huggingface.co/proxima/foorby/resolve/main/be_not_afraid.jpeg">](https://huggingface.co/proxima/foorby/blob/main/be_not_afraid.jpeg)
# about
- 2 embeddings to resemble a popular toy from the 90s
- check the [PDF](https://huggingface.co/proxima/foorby/blob/main/foorby_embeddings_handbook.pdf) for comparisons, prompts and settings
- v2 seems to trend more towards realism
[<img src="https://huggingface.co/proxima/foorby/resolve/main/example_2.jpg">](https://huggingface.co/proxima/foorby/blob/main/example_2.jpg)
# how to use
- place the .bin files in your embeddings folder
- use foorbyv1 or foorbyv2 in your prompt
----
if you enjoy this consider buying me a coffee (ノ◕ヮ◕)ノ*:・゚✧
<a href='https://ko-fi.com/S6S6FUYKY' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
---- |
AnonymousSubmission/pretrained-model-1 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.06 +/- 10.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the repo id and checkpoint filename are assumptions to fill in):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Download the checkpoint from this repo and load it (names assumed).
checkpoint = load_from_hub(repo_id="<repo-id>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AntonClaesson/finetuning_test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: openrail
---
The original snippet referenced `"text-davinci-002"` (an OpenAI API model, not a Hub checkpoint) and a `"text-segmentation"` task that does not exist in `transformers`, so the sketch below swaps in a plain sentence split and `gpt2` as stand-ins to make it runnable:
```python
from transformers import pipeline
# Segment the text (a naive sentence split stands in for a segmentation model).
text = ("This is a longer text that we want to segment into smaller chunks. "
        "Each chunk should correspond to a coherent piece of text.")
segments = [s.strip() + "." for s in text.split(".") if s.strip()]
# Process each segment with a text-generation pipeline ("gpt2" is a stand-in).
nlp = pipeline("text-generation", model="gpt2", tokenizer="gpt2")
for segment in segments:
    print(nlp(segment, max_new_tokens=30))
```
|
Apisate/Discord-Ai-Bot | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.7
inference: false
model-index:
- name: uisikdag/yapimakine
results:
- task:
type: object-detection
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.8844070262480702 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="uisikdag/yapimakine" src="https://huggingface.co/uisikdag/yapimakine/resolve/main/sample_visuals.jpg">
Dataset<br> <a href="https://universe.roboflow.com/kfu-ye4kz/construction-management">Link</a>
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('uisikdag/yapimakine')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights uisikdag/yapimakine --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
|
ArBert/roberta-base-finetuned-ner | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: creativeml-openrail-m
---
This repository hosts the TFLite models for the [KerasCV Stable Diffusion model](https://github.com/keras-team/keras-cv/blob/master/keras_cv/models/stable_diffusion). The model can be broken into three parts:
* Text encoder
* Image decoder
* Denoiser
For each model, there is an equivalent TFLite model in this repository.
The TFLite models were generated using [this repository](https://github.com/sayakpaul/Adventures-in-TensorFlow-Lite/blob/master/Stable_Diffusion_to_TFLite.ipynb). This Colab Notebook also shows how to perform inference.
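A minimal sketch of loading one of the parts with the TFLite Python API (the filename is an assumption; pick the part you need):
```python
import tensorflow as tf  # tf-nightly, as noted below
interpreter = tf.lite.Interpreter(model_path="text_encoder.tflite")  # assumed filename
interpreter.allocate_tensors()
# Inspect the expected input/output tensors before wiring the three parts together.
print([d["shape"] for d in interpreter.get_input_details()])
print([d["shape"] for d in interpreter.get_output_details()])
```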
***Note that you need to have `tf-nightly` installed to use these TFLite models using the Python API.*** |
ArpanZS/debug_squad | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.04 +/- 0.34
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the repo id and checkpoint filename are assumptions to fill in):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Download the checkpoint from this repo and load it (names assumed).
checkpoint = load_from_hub(repo_id="<repo-id>", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Arpita/opus-mt-en-ro-finetuned-synthon-to-reactant | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1560.90 +/- 277.61
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the repo id and checkpoint filename are assumptions to fill in):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Download the checkpoint from this repo and load it (names assumed).
checkpoint = load_from_hub(repo_id="<repo-id>", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
ArseniyBolotin/bert-multi-PAD-ner | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2023-01-23T16:08:12Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: buchegger
---
### buchegger Dreambooth model trained by JDihlmann with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
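A minimal `diffusers` sketch (the repo id is an assumption to fill in):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait of buchegger").images[0]  # concept prompt noted below
image.save("buchegger.png")
```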
Sample pictures of:
buchegger (use that on your prompt)

|
Atchuth/DialoGPT-small-MBOT | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dyc0004 Dreambooth model trained by anmol-chawla with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
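A minimal `diffusers` sketch (the repo id is an assumption; the prompt token is assumed to match the concept name):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of dyc0004").images[0]
image.save("sample.png")
```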
Sample pictures of this concept:
|
Augustvember/WokkaBot2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- object-detection
- computer-vision
- sort
- tracker
- ocsort
---
### Model Description
Observation-Centric SORT ([OC-SORT](https://arxiv.org/abs/2203.14360)) is a pure motion-model-based multi-object tracker. It aims to improve tracking robustness in crowded scenes and when objects are in non-linear motion. It is designed by recognizing and fixing limitations in the Kalman filter and SORT. It is flexible to integrate with different detectors and matching modules, such as appearance similarity. It remains simple, online, and real-time.
<img src="https://raw.githubusercontent.com/noahcao/OC_SORT/master/assets/teaser.png" width="600"/>
### Installation
```
pip install ocsort
```
### Tracker
```python
from ocsort.ocsort import OCSort
# det_thresh is an assumed value; tune it for your detector.
tracker = OCSort(det_thresh=0.3)
for image in images:
    dets = detector(image)  # rows assumed as [x1, y1, x2, y2, score]
    online_targets = tracker.update(dets)
```
### BibTeX Entry and Citation Info
```
@article{cao2022observation,
  title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking},
  author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris},
  journal={arXiv preprint arXiv:2203.14360},
  year={2022}
}
``` |
Augustvember/WokkaBot4 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 843.40 +/- 52.03
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the repo id and checkpoint filename are assumptions to fill in):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Download the checkpoint from this repo and load it (names assumed).
checkpoint = load_from_hub(repo_id="<repo-id>", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Augustvember/WokkaBot5 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- token-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Amal98/autotrain-data-final_model
co2_eq_emissions:
emissions: 3.5122512070831804
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 3026786824
- CO2 Emissions (in grams): 3.5123
## Validation Metrics
- Loss: 0.221
- Accuracy: 0.940
- Precision: 0.557
- Recall: 0.509
- F1: 0.532
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Amal98/autotrain-final_model-3026786824
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Amal98/autotrain-final_model-3026786824", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Amal98/autotrain-final_model-3026786824", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Augustvember/WokkaBot6 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1265.85 +/- 41.03
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the repo id and checkpoint filename are assumptions to fill in):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Download the checkpoint from this repo and load it (names assumed).
checkpoint = load_from_hub(repo_id="<repo-id>", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Augustvember/WokkaBot7 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ausboss/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Augustvember/test | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 40.40 +/- 21.33
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Augustvember/wokka5 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6928 | 0.54 | 500 | 1.4812 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
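As a quick usage sketch (not part of the auto-generated card), the checkpoint can be exercised with the standard summarization pipeline once it is on the Hub. The model id below is a placeholder.
```python
from transformers import pipeline

# Placeholder model id; substitute the actual Hub id of this checkpoint.
summarizer = pipeline("summarization", model="<user>/pegasus-samsum")

# Toy SAMSum-style dialogue, for illustration only.
dialogue = (
    "Hannah: Hey, do you have Betty's number?\n"
    "Amanda: Lemme check.\n"
    "Amanda: Sorry, can't find it. Ask Larry, he called her last time we were at the park."
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```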
|
Augustvember/wokkabottest2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -4.62 +/- 1.26
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders; check this repo's file list for the actual checkpoint name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholders: substitute the actual repo id and checkpoint filename.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Augustvember/your-model-name | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- anime
---
# Ozen Lora
[<img src="https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/01.png" width="800" height="512">](https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/01.png)
A LoRA of Ozen from Made in Abyss.
I'm a relative newbie at character LoRAs, so bear with me, but the results I got were satisfying. Considering the relatively small amount of art she has, I thought this could be nice to share.
The token to invoke the LoRA is 'ozensama'.
I copied [YoungMasterFromSect's](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs/) approach and am posting two versions:
the 'softer' one should be easier to convince to wear different clothes, while the 'harder' one should mostly stick to her normal outfit.
This was trained on base NovelAI, so any models derived from it should work fine.
The dataset contained no lewds and little reference for what Ozen's bare arms look like, so it won't be able to recreate those.
# Model comparison
[<img src="https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/07.png" width="840" height="964">](https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/07.png)
# Previews
All previews use the harder LoRA version as well as my own model; you should achieve similar results with Anything V4.5 and other models.
Some of the images also use the 'dpep2 768' LoRA from the wonderful [CloserToDeath](https://huggingface.co/closertodeath), which improves backgrounds tremendously.
[<img src="https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/02.png" width="768" height="512">](https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/02.png)
<details>
<summary>Sample Prompt</summary>
<pre>
(best quality), watercolor \(medium\), (ultra-detailed), (best illustration), (best shadow), (an extremely delicate and beautiful), 1girl, solo, ozensama, ozen, black cape, collared cape, black eyes, multicolored hair, looking at viewer, black gloves, short hair, white hair, black hair, wristband, black jacket, belt, pouch, whistle, (empty eyes:0.8), pale skin, pants, very detailed background, mksks style, highly detailed background,
Negative prompt: multiple views, (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit , fewer digits, (extra arms:1.2), bad hands,
Steps: 35, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 3877896406, Size: 768x512, Model hash: f30926bf30, Model: youcha, Denoising strength: 0.3, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: ozensama-000003(f2bba21f), AddNet Weight 1: 0.8, AddNet Module 2: LoRA, AddNet Model 2: dpep2 768(4f9473ab), AddNet Weight 2: 0.3, Hires upscale: 2.5, Hires steps: 15, Hires upscaler: 4x-AnimeSharp, Discard penultimate sigma: True
</pre>
</details>
[<img src="https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/03.png" width="512" height="768">](https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/03.png)
<details>
<summary>Sample Prompt</summary>
<pre>
(best quality), (ultra-detailed), (best illustration), (best shadow), (an extremely delicate and beautiful), 1girl, ozensama, ozen, black cape, collared cape, black eyes, multicolored hair, looking at viewer, black gloves, short hair, white hair, black hair, wristband, black jacket, belt, pouch, whistle, (empty eyes:0.8), pale skin, detailed background, pants, very detailed background, paw pose, cat ears
Negative prompt: (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit , fewer digits, (extra arms:1.2), bad hands,
Steps: 25, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 553564940, Size: 512x768, Model hash: f30926bf30, Model: youcha, Denoising strength: 0.3, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: ozensama-000003(f2bba21f), AddNet Weight 1: 0.8, AddNet Module 2: LoRA, AddNet Model 2: dpep2 768(4f9473ab), AddNet Weight 2: 0.5, Hires upscale: 2, Hires steps: 15, Hires upscaler: 4x-AnimeSharp</pre>
</details>
[<img src="https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/04.png" width="512" height="768">](https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/04.png)
<details>
<summary>Sample Prompt</summary>
<pre>
(best quality), (ultra-detailed), (best illustration), (best shadow), (an extremely delicate and beautiful), 1girl, ozensama, ozen, black cape, collared cape, black eyes, multicolored hair, looking at viewer, black gloves, short hair, white hair, black hair, wristband, black jacket, belt, pouch, whistle, (empty eyes:0.8), pale skin, detailed background, pants, forest, very detailed background, hand up, V, smile, cat ears
Negative prompt: (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit , fewer digits, (extra arms:1.2), bad hands,
Steps: 25, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 2496261555, Size: 512x768, Model hash: f30926bf30, Model: youcha, Denoising strength: 0.3, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: ozensama-000003(f2bba21f), AddNet Weight 1: 0.8, AddNet Module 2: LoRA, AddNet Model 2: dpep2 768(4f9473ab), AddNet Weight 2: 0.5, Hires upscale: 2, Hires steps: 15, Hires upscaler: 4x-AnimeSharp
</pre>
</details>
[<img src="https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/05.png" width="512" height="768">](https://huggingface.co/Aotsuyu/ozen-Lora/resolve/main/previews/04.png)
<details>
<summary>Sample Prompt</summary>
<pre>
(best quality), (ultra-detailed), (best illustration), (best shadow), (an extremely delicate and beautiful), 1girl, ozensama, ozen, (maid dress), black eyes, multicolored hair, looking at viewer, (black gloves), short hair, white hair, black hair, whistle, (empty eyes:0.8), pale skin,
Negative prompt: (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit , fewer digits, (extra arms:1.2), bad hands,
Steps: 35, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 205714489, Size: 512x768, Model hash: f30926bf30, Model: youcha, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: ozensama-000003(f2bba21f), AddNet Weight 1: 1.0
</pre>
</details>
# Contact
If you have any questions you can bother me on
[Twitter](https://twitter.com/aojiru_pixiv)
My pixiv if you're up for lewds:
[Pixiv](https://www.pixiv.net/en/users/12336647)
|
Aurora/asdawd | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.04 +/- 0.76
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders; check this repo's file list for the actual checkpoint name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholders: substitute the actual repo id and checkpoint filename.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Aurora/community.afpglobal | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.7
inference: false
model-index:
- name: uisikdag/hardhat
results:
- task:
type: object-detection
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.9284727818930048 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="uisikdag/hardhat" src="https://huggingface.co/uisikdag/hardhat/resolve/main/sample_visuals.jpg">
Dataset: <br><a href="https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5">Link</a>
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('uisikdag/hardhat')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights uisikdag/hardhat --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
|
AvatarXD/DialoGPT-medium-Blitzo | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.74 +/- 0.44
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
The snippet below inlines the download step (the course notebook provides an equivalent `load_from_hub` helper):
```python
import gym
import pickle
from huggingface_hub import hf_hub_download
model = pickle.load(open(hf_hub_download(repo_id="KoRiF/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl"), "rb"))  # stands in for the course's load_from_hub
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Axon/resnet18-v1 | [
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
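As a usage sketch (not part of the auto-generated card), extractive question answering with this checkpoint would look like the following. The model id below is a placeholder.
```python
from transformers import pipeline

# "<user>/bert-finetuned-squad" is a placeholder model id for this checkpoint.
qa = pipeline("question-answering", model="<user>/bert-finetuned-squad")

context = "This model is a fine-tuned version of bert-base-cased on the SQuAD dataset."
result = qa(question="Which dataset was the model fine-tuned on?", context=context)
print(result["answer"], result["score"])
```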
|
Axon/resnet50-v1 | [
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: gpl-3.0
---
Pre-trained word embeddings built from the text of published clinical case reports. The embeddings have 100 dimensions and were trained with the word2vec algorithm on case reports found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/
Citation:
```
@article{flamholz2022word,
title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
journal={Journal of Biomedical Informatics},
volume={125},
pages={103971},
year={2022},
publisher={Elsevier}
}
```
## Quick start
Word embeddings are compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/) format.
First download the files from this archive. Then load the embeddings into Python.
```python
from gensim.models import FastText, Word2Vec, KeyedVectors # KeyedVectors are used to load the GloVe models
# Load the model
model = Word2Vec.load('w2v_oa_cr_100d.bin')
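# Note: `wv.word_vec` is the gensim 3.x-style accessor; on gensim 4+ you may need
# model.wv.get_vector('diabetes') or model.wv['diabetes'] instead (version-dependent; check your install).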
# Return 100-dimensional vector representations of each word
model.wv.word_vec('diabetes')
model.wv.word_vec('cardiac_arrest')
model.wv.word_vec('lymphangioleiomyomatosis')
# Try out cosine similarity
model.wv.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.wv.similarity('myocardial_infarction', 'heart_attack')
model.wv.similarity('lymphangioleiomyomatosis', 'lam')
``` |
Ayato/DialoGTP-large-Yuri | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-23T19:52:04Z | ---
license: openrail
library_name: diffusers
---
# Yaoi diffusion @768:
Stable diffusion model of [email protected] and [email protected] finetuned on 45000+ images of yaoi/bara/shota/furry and real life males, tagged with blip, deepdanbooru (used both e621 and wd14-vit) @768:
Current version:
YaoiDiffusionV1.ckpt
more details and terms of use at:
https://civitai.com/models/35801/yaoi-diffusion

|
Aybars/ModelOnTquad | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
"portrait of Emma Watson as Hermione Granger sitting next to a window reading a book, focused expression, face focus, golden hour, art by Kenne Gregoire and Krenz Cushart, trending on artstation"
"decorated modern country house interior, trending on artstation, hdr, 8 k, light reflections"
"island inside of a treasure, 3d render, unreal engine, ultradetailed, trending on artstation, devianart, cgsociety, concept art"
"a highly detailed portrait of a man with dark green hair and green glowing eyes, high detail clothing, concept art, anime, artstation, professional"
"fantasy painting with a wizard in red robes in a surreal environment by greg rutkowski and michael whelan w 1 0 2 4"
"anatomical sculpture of central nervous system, quixel megascans, photorealism, cgi, digital concept art, redshift render, physically based rendering, cinematic, filmic"
"cyberpunk corporate woman, overweight | | realistic shaded, fine details, realistic shaded lighting poster by greg rutkowski, diego gisbert llorens, magali villeneuve, artgerm, jeremy lipkin and rob rey"
"a detailed portrait of a giant michanical face buried half way in a cliff by Greg Rutkowski, Sung Choi, Mitchell Mohrhauser, Maciej Kuciara, Johnson Ting, Maxim Verehin, Peter Konig, final fantasy, Marco lense , 8k photorealistic, cinematic lighting, HD, high details, atmospheric , trending on artstation"
"full-body baroque and bladerunner style peach neon and ceramic statue of a beautiful pale ninja robot goddess humanoid thrusting sim roupa, glowing white face, street hoody of red steampunk lasers, emeralds, swirling white silk fabric. futuristic elements. prismatic liquid rainbow light, full-length view. space robots. human skulls. throne made of bones, intricate artwork by caravaggio. Trending on artstation, octane render, cinematic lighting from the right, hyper realism, octane render, 8k, depth of field, 3D"
"cyberpunk, cyberspace, portrait of alessandra ambrosio in gold space suit, painted by bobby chiu, painted by igor kieryluk, digital art, trending on artstation"
"rgb wires, a woman crawling on the ceiling, library, waterfall, cinematic, movie scene, inspired by zdzislaw beksinski, veins,, cables everywhere, bedroom, ultra realistic, concept art, intricate details, highly detailed, photorealistic, octane render, 8 k"
"a simple micro-service deployed to a public cloud, security, attack vector, trending on Artstation, painting by Jules Julien, Leslie David and Lisa Frank, muted colors with minimalism"
"overgrown foliage overtaking massive japanese temples, underwater environment, borealis, scenery, professional, award - winning, trending on artstation, hyper detailed, realistic, beautiful, emotional, shiny, golden, picture"
"realistic painting of a tardigrade kaiju, with 6 legs in a desert storm, by james gurney, slime, big globule eye, godzilla, vintage, concept art, oil painting, tonalism, crispy"
"portrait painting of a cyberpunk corporate boss elven michael b. jordan, ultra realistic, concept art, intricate details, eerie, highly detailed, photorealistic, octane render, 8 k, unreal engine. art by artgerm and greg rutkowski and charlie bowater and magali villeneuve and alphonse mucha"
"symmetry!! portrait of a female sorcerer, dar fantasy, intricate, elegant, highly detailed, my rendition, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha and huang guangjian and gil elvgren and sachin teng"
"realistic cybertronic statue of shiva in hindu temple by alphonse mucha, ayami kojima, amano, greg hildebrandt, and mark brooks, feminine, art nouveau, cyberpunk, neo - gothic, gothic, character concept design"
"concept art of a minimalistic modern logo for a european logistics corporation"
"cybernetically enhanced cyborg hyena, realistic cyberpunk 2 0 7 7 concept art"
"clown from the movie it as a cyborg robot on a miniature town, studio lighting, unreal render, unreal engine 5, octane render, enchanted plants, cinematic, intricate, ornate, photorealistic, ultra detailed, realistic, 1 0 0 mm, photography, octane, high definition, depth of field, bokeh, 8 k, behance, trending on artstation"
"pastel landscape of an anime field. clean sharp digital art, environment concept art, by rossdraws, ghibli, breath of the wild, greg rutkowski"
"the legendary island sized lion snake, made by Stanley Artgerm Lau, WLOP, Rossdraws, ArtStation, CGSociety, concept art, cgsociety, octane render, trending on artstation, artstationHD, artstationHQ, unreal engine, 4k, 8k,"
"a tiny worlds by greg rutkowski, sung choi, mitchell mohrhauser, maciej kuciara, johnson ting, maxim verehin, peter konig, bloodborne, 8 k photorealistic, cinematic lighting, hd, high details, dramatic, dark atmosphere, trending on artstation"
"donald trump as arseface, horror, high details, intricate details, by vincent di fate, artgerm julie bell beeple, 1 9 8 0 s, inking, vintage 8 0 s print, screen print"
"portrait of a woman by greg rutkowski, a jedi commander, mara jade, wearing the tactical gear of the galactic alliance, star wars expanded universe, she is about 4 0 years old, highly detailed portrait, digital painting, artstation, concept art, smooth, sharp foccus ilustration, artstation hq"
"symmetry!! diablo 3 the high heavens, highly detailed, perfect lighting, perfect composition, 4 k, artgerm, derek zabrocki, greg rutkowski"
"saddam hussein as t - 8 0 0 terminator, art by greg rutkowski,"
"portrait of a happy family at the beach, photo, realistic, smooth face, perfect eyes, wide angle, sharp focus, 8 k high definition, insanely detailed, intricate, elegant, art by artgerm"
"macro closeup headshot of a beautiful happy 2 0 years old britney spears ethereal magical fairy princess wearing a white robe and flowers in her hair in a fantasy garden, d & d, fantasy, intricate, rim light, god rays, volumetric lighting, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by greg rutkowski, maxfield parrish and alphonse mucha, sunrise, new art nouveau, soft ambient lighting, particle effects"
"a beautiful matte painting of glass forest, m. c. escher and gustave dore and sam spratt, cinematic, dynamic lighting, concept art, realistic, realism, colorful"
"the hermit from major arcana character concept art, digital painting, mixed media, trending on artstation and deviantart, epic composition, magnum opus, highly detailed, 8 k"
"a beautiful whimsical woman standing under a multi-colored binary blackhole with an accretion disc, casting magic, glowing trails following her arms, acidwave, hall of mirrors, interstellar galaxy, by Lois van Baarle, by Greg Rutkowski, by artgerm, by beeple, by studio ghibli, cinematic angle, volumetric lighting, 4k resolution, octane render, trending on artstation, masterpiece"
"a knight in armor on a thick strong warhorse by mike allred and frazetta and karol bak sharp digital painting. dreaming latent space. matte painting, concept art. artstation. digital render. realistic, 8 k"
"office secretary anime, d & d, fantasy, portrait, highly detailed, headshot, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and magali villeneuve"
"beautiful, young woman, cybernetic, cyberpunk, detailed gorgeous face, flowing hair, vaporwave aesthetic, synthwave , digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha"
"isometric chubby 3 d game cannon, with detailed, clean, cartoon, octane render, unreal engine, artgerm, artstation"
"kylie minogue making pancakes, animation pixar style, by pendleton ward, magali villeneuve, artgerm, rob rey and kentaro miura style, golden ratio, trending on art station"
"portrait knights of zodiac girl, golden and copper shining armor, karate pose, in ruined agora of athens sunrise, ssci - fi and fantasy, intricate and very very beautiful and elegant, highly detailed, digital painting, artstation, concept art, smooth and sharp focus, illustration, art by tian zi and wlop and alphonse mucha and z - - ed"
"a gladiator in roman times, intricate, elegant, volumetric lighting, scenery, digital painting, highly detailed, artstation, sharp focus, illustration, concept art,ruan jia, steve mccurry"
"Short, elf, grey skin with a green undertone, orange/red hair, ears are 3 inches long ending at a point, their eyes are like a pale yellow light all iris (no black or white), uhhh they have deer like legs that bend backwards to allow for faster movement, and they are both masc and femme equally so I don�t have to choose a gender dramatic lighting, illustration by Greg rutkowski, yoji shinkawa, 4k, digital art, concept art, trending on artstation"
"young glitchy woman, beautiful girl, vaporwave aesthetic, realistic, serov, surikov, vasnetsov, repin, kramskoi, uplight, insanely detailed, charlie bowater, tom bagshaw, octane rendered, 8 k, unreal engine, illustration, trending on artstation, masterpiece"
"painting of the figure of a pin up girl posing, high contrast, poster, clothes in the fashion of 6 0's, concept art, black and white, dramatic lighting, digital art, 8 k, drawn by arnold armitage"
"hyperdetailed portrait of kristen schaal as delirium of the endless, colourful make up, the sandman, made by caravaggio stanley artgerm lau wlop rossdraws artstation cgsociety concept art cgsociety octane render"
"a melting Roman numeral clock in the style of the painting the persistence of memory by Salvador Dali, dynamic lighting, photorealistic fantasy concept art, trending on art station, stunning visuals, creative, cinematic, ultra detailed"
"portrait of 1 2 yo emma watson at hogwarts, olive skin, long dark hair, beautiful bone structure, shiny gold background, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, artgerm, rutkowski, mucha"
"this human has no soul, cinematic shot, 8 k, art by artgerm and greg rutkowski and alphonse mucha"
"a high contrast painting of a flooded ancient tower at night by ralph mcquarrie simon stalenhag carl spitzweg rene magritte roger dean, full-length view, highly detailed, vibrant colors, extremely high contrast!, symmetry, great composition, high detail, cinematic lighting, award winning masterpiece, trending on artstation"
"3 d goddess of death close - up profile portrait with ram skull. beautiful intricately detailed japanese crow kitsune mask and clasical japanese kimono. betta fish, jellyfish phoenix, bio luminescent, plasma, ice, water, wind, creature, artwork by tooth wu and wlop and beeple and greg rutkowski"
"painting of hybrid between butterfly!!!!!!!! & dragon, intercrossed animal, mixture animal, by tiffany bozic, by zdzislaw beksinski, biomorphic, cold hue's, warm tone gradient background, concept art, beautiful composition, digital painting,"
"the beautiful scene render that a beautiful girl lies in the arms of a huge white dragon in the fairyland surrounded by white clouds, in the style of makoto shinkai victo ngai and peter mohrbacher studio ghibli artgerm karol bak beeple, animation style, 8 k hd, dream, trending on cgsociety, trending on artstation, ultra wide angle, animation style, hyperrealism, 3 d render, hyper detailed"
"a police car driving through shallow water, flooded city, raining, people walking through shallow water, muted colors, hyperrealistic, oil painting, intricate, cgsociety, artstation, 8 k, cinematic, soft lighting, by greg rutkowski, by wlop, by artgerm"
"fullbody!! dynamic movement pose, beautiful ethnic woman with flowing hair, big natural horns on her head, gold jewellery, dnd, face, fantasy, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha"
"close up of hyper - realistic spongebob, photorealistic, mesmerizing, fantastical, concept art, highly detailed, artstation, trending, feng zhu, shaddy safadi, noah bradley, tyler edlin, jordan grimmer, darek zabrocki, neil blevins, tuomas korpi"
"marisol pepa flores, face portrait, elegant, fantasy, hd shot, digital portrait, beautiful, artstation, comic style, by artgerm, guy denning, jakub rozalski, magali villeneuve and charlie bowater"
"modern city, Parisian buildings, billboards, advertisements, small buildings, dark, matte painting, concept art, digital painting, style of Ian Hubert, warm lighting, futuristic, volumetric lighting, street view, daytime, godrays , high detail, no sky scrapers"
"a portrait of an anthropomorphic donkey holding a martini, cyberpunk!, fantasy, elegant, digital painting, artstation, concept art, matte, sharp focus, illustration, art by josan gonzalez"
"realistic render portrait realistic render portrait of group of flying blue whales towards the moon, intricate, toy, sci - fi, extremely detailed, digital painting, sculpted in zbrush, artstation, concept art, smooth, sharp focus, illustration, chiaroscuro lighting, golden ratio, incredible art by artgerm and greg rutkowski and alphonse mucha and simon stalenhag"
"molly millions, closeup portrait of a young beautiful cyberpunk woman, mirror eye implants, black hair in a rough shag, sunset, neuromancer, street samurai, cyberpunk city background, megacity, gorgeous view, depth, painted by seb mckinnon, high detail, digital art, painted by greg rutkowski, trending on artstation"
"portrait of a beautiful woman wearing a sundress at a lake, looking at camera, d & d, nice outfit, long hair, intricate, elegant, stylish, realistic, extremely detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, ambient lighting, art by artgerm and greg rutkowski and alphonse mucha and simon stalenhag"
"young asian woman, iris van herpen, beautiful face, perfect symmetrical body, full body shot, inflateble shapes, wires, tubes, veins, jellyfish, white biomechanical details, wearing epic bionic cyborg implants, masterpiece, intricate, biopunk, vogue, highly detailed, artstation, concept art, cyberpunk, octane render"
"beautiful lifelike award winning pencil illustration of lisa loeb trending on art station artgerm greg rutkowski alphonse mucha cinematic atmospheric"
"a cyberpunk ice cream truck, black velvet painting, artwork by Greg Rutkowski, John Berkey, Alexander Jansson, Kuvshinov, WLOP, Artgerm, trending on ArtStationHD. octane render, greg rutkowski, hyper realism, high detail, octane render, vervia, 8k"
"a beautiful painting of in the silent, starlit nights, beautiful flower, girl, by greg rutkowski, trending on artstation"
"a masterpiece ultrarealistic ultradetailed portrait of a incredibly beautiful 0 0 s blonde bimbo in pink clothes. baroque renaissance. medium shot, intricate, elegant, highly detailed. trending on artstation, digital art, by stanley artgerm lau, wlop, rossdraws, james jean, andrei riabovitchev, marc simonetti, yoshitaka amano. background by james jean and gustav klimt, light by julie bell, 4 k, porcelain skin."
"Masculine face made of multicolored bubbles, motion blur, blur, bokeh, Affinity Photo, trending on Artstation, award-winning, art by Studio Ghibli, by Lisa Frank"
"rubik's cube transformer, tristan eaton, victo ngai, artgerm, rhads, ross draws"
"a fat ugly man, in the style of artgerm, gerald brom, atey ghailan and mike mignola, vibrant colors and hard shadows and strong rim light, plain background, comic cover art, trending on artstation"
"half joe biden half reptiloid, ultra realistic, concept art, intricate details, eerie, highly detailed, photorealistic, octane render, 8 k, unreal engine. art by artgerm and greg rutkowski and alphonse mucha"
"an illustration of a house with large barn with many cute flower pots and beautiful blue sky scenery. earth golem standing in yard, cinematic view, epic sky, detailed, concept art, low angle, high detail, warm lighting, volumetric, godrays, vivid, beautiful, trending on artstation, by jordan grimmer, huge scene, grass, art greg rutkowski"
"A beautiful digital painting of a sexy female anubis, holding a sword, the moon behind her, intricate, cinematic lighting, highly detailed, digital painting, Artstation, concept art, smooth, sharp focus, illustration, art by Tom Bagshaw, Artgerm and Greg Rutkowski"
"worm's eye view of overwatch headquarters carved inside a mountain surrounding a lush well kept garden, magical, natural light, fantasy, sharp focus, concept art, by greg rutkowski and craig mullins, cozy atmospheric"
"a portrait of a beautiful willa holland as a 1 9 5 0 s rockabilly greaser, art by lois van baarle and loish and ross tran and rossdraws and sam yang and samdoesarts and artgerm, digital art, highly detailed, intricate, sharp focus, trending on artstation hq, deviantart, unreal engine 5, 4 k uhd image"
"mariano rajoy with cyberpunk implants, deep focus, d & d, fantasy, intricate, elegant, highly detailed, digital painting, artstation, concept art, matte, sharp focus, illustration, hearthstone, art by artgerm and greg rutkowski and alphonse mucha"
"dwayne johnson holding a purple lightsaber wearing brown jedi robes, concept art, ultra realistic"
"panoramic view, a pilot girl, soldier clothing, battlefield in background, anime style, hair down, symmetrical facial features, realistic hands, from arknights, hyper realistic, 4 k, extreme detail, 1 0 1 st airborne, trending artstation, safebooru, realistic lighting, by alphonse mucha, greg rutkowski, sharp focus"
"danny devito riding a large harley davidson, hyperrealism, lens glare, trending on artstation, intricate, 8 k"
"a girl with short blue hair and blue eyes is sitting on a cloud, anime style, light effect, anime style hyper detailed, illustration, bloody, intricate, elegant, digital painting, artstation, smooth, sharp focus, art by artgerm and greg rutkowski and alphonse mucha."
"totem aztek tribal deepdream intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Greg Rutkowski and Alphonse Mucha, masterpiece"
"painting of dark muscular oiled indian woman back, ultra realistic, concept art, intricate details, eerie, highly detailed, photorealistic, octane render, 8 k, unreal engine. art by artgerm and greg rutkowski and alphonse mucha"
"ultra realistic illustration of taco cat, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha"
"kyoto animation girl wearing cyberpunk intricate streetwear riding dirt bike, respirator, detailed portrait, cell shaded, 4 k, concept art, by wlop, ilya kuvshinov, artgerm, krenz cushart, greg rutkowski, pixiv. cinematic dramatic atmosphere, sharp focus, volumetric lighting, cinematic lighting, studio quality"
|
Aybars/ModelOnWhole | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: openrail
tags:
- nsfw
- stable diffusion
---
# PoV Skin Textures - Dreamlike r34
[pov-skin-texture-dreamlike-r34](https://civitai.com/models/4481/pov-skin-texture-dreamlike-r34)
This version has vae-ft-mse-840000-ema-pruned.ckpt baked in.
Because this model uses Dreamlike Diffusion 1.0, it is subject to the following license:
## License
This model is licensed under a modified CreativeML OpenRAIL-M license.
- You can't host or use the model or its derivatives on websites/apps/etc., from which you earn, will earn, or plan to earn revenue or donations. If you want to, please email us at [email protected]
- You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)
- You are free to host the model or its derivatives on completely non-commercial websites/apps/etc (Meaning you are not getting ANY revenue or donations). Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)
- You are free to use the outputs of the model or the outputs of the model's derivatives for commercial purposes in teams of 10 or less
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content
- The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/blob/main/LICENSE.md |
AyushPJ/ai-club-inductions-21-nlp-ALBERT | [
"pytorch",
"albert",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
tags:
- Stable Diffusion
- Senko
- Hypernetwork
datasets:
- NeuroSenko/senko_by_rimukoro
---
## Description
This hypernetwork will help you make your Senko-san look like she was drawn by Rimukoro. This model was trained using the [AbyssOrangeMix_base](https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix_base.ckpt) model, so it should work fine with that specific model or with other related models.
## Usage
To use this hypernetwork, just place the .pt file in your `models\hypernetworks` directory; then, depending on your UI, you will need to select this hypernetwork in the settings or use it directly in your positive prompt like `<hypernet:abyss_orange_mix_senko_by_rimukoro:1.0>`
## Dataset
Feel free to use the dataset I used for training this model. You can find it [here](https://huggingface.co/datasets/NeuroSenko/senko_by_rimukoro).
## Examples

## PNG Info Example
masterpiece, best quality, 1girl, solo, cinematic lighting, 1girl, solo, senko \(sewayaki kitsune no senko-san\), senko-san, sewayaki kitsune no senko-san, animal ears, fox ears, fox girl, fox tail, hair flower, hair ornament, orange eyes, orange hair, rimukoro, short hair, tail, flat chest, looking at viewer, light smile, full body, forest, rain, darkness, moon, wet clothes, blush, jacket, T-shirt, skirt
Negative prompt: ugly, old, amateur drawing, odd, fat, lowres, text, error, worst quality, low quality, jpeg artifacts, signature, watermark, username, (blurry:1.3), out of focus, cropped, out of frame, cloned face, mutilated, deformed, gross proportions, disfigured, mutated hands, poorly drawn hands, bad anatomy, (bad hands:1.4), missing fingers, extra digit, (extra fingers:1.3), fewer digits, poorly drawn face, fused fingers, long neck, extra limbs, broken limb, asymmetrical eyes cell shading, watercolor
Steps: 30, Sampler: DDIM, CFG scale: 12, Seed: 1393421640, Size: 640x832, Model hash: ffa7b160, Model: AbyssOrangeMix_base, Hypernet: abyss_orange_mix_senko_by_rimukoro, Hypernet hash: f3753abd, Denoising strength: 0.7, Eta: 0.69, Clip skip: 2, Hires upscale: 2, Hires upscaler: Latent
## Choosing a hypernetwork with a non-default number of steps
Hypernetwork `abyss_orange_mix_senko_by_rimukoro.pt` is the version trained for 4000 steps (which I personally prefer). I also published hypernetworks trained for different numbers of steps (up to 15000).
You can find these hypernetworks in the [models folder](https://huggingface.co/NeuroSenko/abyss_orange_mix_senko_by_rimukoro_hyper/tree/main/models).
To make it easier for you to choose a hypernetwork, I published [this grid](https://neurosenko.github.io/sd-grid-viewer/?configUrl=https://neurosenko.github.io/sd-grids/orange-senko-by-rimukoro/config.json), which you can use to compare these hypernetworks across 5 different seeds. |
AyushPJ/ai-club-inductions-21-nlp-roBERTa | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.38 +/- 0.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **PandaReachDense-v2**
This is a trained model of a **PPO** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders; check this repo's file list for the actual checkpoint name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholders: substitute the actual repo id and checkpoint filename.
checkpoint = load_from_hub(repo_id="<user>/ppo-PandaReachDense-v2", filename="ppo-PandaReachDense-v2.zip")
model = PPO.load(checkpoint)
```
|
AyushPJ/test-squad-trained-finetuned-squad | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language: de
license: mit
tags:
- german
- deutsch
---
# Creators
- [Stefan Schweter](https://github.com/stefan-it) ([schweter.ml](https://schweter.ml))
- [Philip May](https://may.la) ([T-Systems onsite](https://www.t-systems-onsite.de/))
- [Philipp Schmid](https://www.philschmid.de/) ([Hugging Face](https://huggingface.co/))
# Training Data
The model was trained on the `HEAD` and `MIDDLE` parts of the [German colossal, cleaned Common Crawl corpus (GC4)](https://german-nlp-group.github.io/projects/gc4-corpus.html).
# Useful links
* [Pretraining details](https://github.com/GermanT5/pre-training)
* [Corpus overview](https://github.com/GermanT5/corpus)
* [Tokenizer details](https://github.com/GermanT5/tokenizer)
* [Evaluation overview](https://github.com/GermanT5/german-t5-eval)
# License - The MIT License
Copyright 2022 Stefan Schweter<br>
Copyright 2022 Philip May, T-Systems onsite<br>
Copyright 2022 P. S.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
Azaghast/GPT2-SCP-Descriptions | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.7
inference: false
model-index:
- name: uisikdag/msgsulogo
results:
- task:
type: object-detection
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.845 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="uisikdag/msgsulogo" src="https://huggingface.co/uisikdag/msgsulogo/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('uisikdag/msgsulogo')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights uisikdag/msgsulogo --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
|
Azizun/Geotrend-10-epochs | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language: de
license: mit
tags:
- german
- deutsch
---
# Creators
- [Stefan Schweter](https://github.com/stefan-it) ([schweter.ml](https://schweter.ml))
- [Philip May](https://may.la) ([T-Systems onsite](https://www.t-systems-onsite.de/))
- [Philipp Schmid](https://www.philschmid.de/) ([Hugging Face](https://huggingface.co/))
# Training Data
The model was trained on the `HEAD` and `MIDDLE` parts of the [German colossal, cleaned Common Crawl corpus (GC4)](https://german-nlp-group.github.io/projects/gc4-corpus.html).
# Useful links
* [Pretraining details](https://github.com/GermanT5/pre-training)
* [Corpus overview](https://github.com/GermanT5/corpus)
* [Tokenizer details](https://github.com/GermanT5/tokenizer)
* [Evaluation overview](https://github.com/GermanT5/german-t5-eval)
# License - The MIT License
Copyright 2022 Stefan Schweter<br>
Copyright 2022 Philip May, T-Systems onsite<br>
Copyright 2022 P. S.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
Azuris/DialoGPT-small-envy | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8937
## Model description
More information needed
## Intended uses & limitations
More information needed
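Pending proper documentation, a minimal usage sketch is given below — it assumes the fine-tuned checkpoint is available under the placeholder path `my_awesome_qa_model`:
```python
# Hedged sketch — "my_awesome_qa_model" is a placeholder for wherever the
# fine-tuned checkpoint was saved (local path or Hub repo id).
from transformers import pipeline

qa = pipeline("question-answering", model="my_awesome_qa_model")

result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], round(result["score"], 3))
```
Given the evaluation loss reported above, answer quality should be expected to be limited.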
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
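For reference, here is a sketch of how these hyperparameters map onto `TrainingArguments` — the original training script isn't shown, so the output directory and per-epoch evaluation strategy are assumptions (the latter inferred from the per-epoch validation losses below):
```python
# Hedged sketch — output_dir and evaluation_strategy are assumptions; the
# remaining values mirror the hyperparameters listed above (the Adam betas and
# epsilon match the Trainer defaults, so they need no explicit setting).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_qa_model",  # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumed from the results table
)
```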
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 15 | 4.5857 |
| No log | 2.0 | 30 | 4.6562 |
| No log | 3.0 | 45 | 4.8485 |
| No log | 4.0 | 60 | 4.9157 |
| No log | 5.0 | 75 | 4.8937 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
BAHIJA/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ---
license: mit
tags:
- Stable Diffusion
- Senko
- Hypernetwork
datasets:
- NeuroSenko/senko_by_rimukoro
---
## Description
This hypernetwork will help you make your Senko-san look as if she was drawn by Rimukoro. The model was trained using the [Anything V3.0](https://huggingface.co/admruul/anything-v3.0/blob/main/Anything-V3.0.ckpt) model (unfortunately, the original model [was deleted by its author](https://twitter.com/linaqruf_/status/1616821313804570624), so the link above points to a fork), so it should work fine with that specific model or with other related models.
## Usage
To use this hypernetwork, place the .pt file in your `models\hypernetworks` directory. Then, depending on your UI, either select the hypernetwork in the settings or reference it directly in your positive prompt, e.g. `<hypernet:any3_senko_by_rimukoro:1.0>`.
## Dataset
Feel free to use the dataset I used for training this model; you can find it [here](https://huggingface.co/datasets/NeuroSenko/senko_by_rimukoro).
## Examples

## PNG Info Example
masterpiece, best quality, 1girl, solo, cinematic lighting, (senko \(sewayaki kitsune no senko-san\):1.4), sewayaki kitsune no senko-san, animal ear fluff, animal ears, blonde hair, blush, fox ears, fox tail, hair between eyes, hair flower, hair ornament, tail, sewayaki kitsune no senko-san, kimono, white_dress, girl, solo, alone, japanese clothes, flat chest, garden, outdoors, light smile
Negative prompt: ugly, old, amateur drawing, odd, fat, lowres, text, error, worst quality, low quality, jpeg artifacts, signature, watermark, username, (blurry:1.3), out of focus, cropped, out of frame, cloned face, mutilated, deformed, gross proportions, disfigured, mutated hands, poorly drawn hands, bad anatomy, (bad hands:1.4), missing fingers, extra digit, (extra fingers:1.3), fewer digits, poorly drawn face, fused fingers, long neck, extra limbs, broken limb, asymmetrical eyes cell shading, watercolor
Steps: 30, Sampler: DDIM, CFG scale: 12, Seed: 1248549005, Size: 2176x1152, Model hash: 6569e224, Model: Anything-V3.0, Hypernet: any3_senko_by_rimukoro, Hypernet hash: 44659655, Denoising strength: 0.71, Eta: 0.69, Clip skip: 2, Mask blur: 4
## Choosing a hypernetwork trained for a non-default number of steps
The hypernetwork `any3_mix_senko_by_rimukoro.pt` was trained for 4000 steps (the variant I personally prefer). I also published hypernetworks trained for other step counts (up to 9000).
You can find them in the [models folder](https://huggingface.co/NeuroSenko/any3_senko_by_rimukoro_hyper/tree/main/models).
To make it easier to choose a hypernetwork, I published [this grid](https://neurosenko.github.io/sd-grid-viewer/?configUrl=https://neurosenko.github.io/sd-grids/any3-senko-by-rimukoro/config.json), which you can use to compare these hypernetworks across 5 different seeds. |