Columns:
modelId: string (4 to 81 characters)
tags: list
pipeline_tag: string (17 distinct values)
config: dict
downloads: int64 (0 to 59.7M)
first_commit: timestamp[ns, tz=UTC]
card: string (51 to 438k characters)
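A minimal sketch of how rows with this schema could be loaded and inspected, assuming the table is published as a Hugging Face dataset; the repository id below is a placeholder, not a confirmed source.

```python
# Minimal sketch: load rows shaped like the schema above and inspect them.
# Assumptions: the data is available as a Hugging Face dataset, and
# "your-namespace/model-metadata" is a placeholder repository id.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("your-namespace/model-metadata", split="train")

# Columns should match the header: modelId, tags, pipeline_tag, config,
# downloads, first_commit, card.
print(ds.features)

# Count rows per pipeline_tag, skipping rows where the tag is null.
tag_counts = Counter(row["pipeline_tag"] for row in ds if row["pipeline_tag"])
print(tag_counts.most_common(10))

# Keep only token-classification models with at least one download.
token_cls = ds.filter(
    lambda row: row["pipeline_tag"] == "token-classification" and row["downloads"] > 0
)
print(len(token_cls))
```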
ArBert/bert-base-uncased-finetuned-ner
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.6208299430431244 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 1.1260 - Accuracy: 0.6208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.6795 | 1.0 | 31440 | 1.0919 | 0.6263 | | 0.2741 | 2.0 | 62880 | 1.3428 | 0.6199 | | 0.1471 | 3.0 | 94320 | 1.5127 | 0.6164 | | 0.0975 | 4.0 | 125760 | 1.6816 | 0.6108 | | 0.0723 | 5.0 | 157200 | 1.9625 | 0.6117 | | 0.0576 | 6.0 | 188640 | 1.9607 | 0.6119 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
ArBert/roberta-base-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_mnli_384 results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.6352725793327909 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_mnli_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.9264 - Accuracy: 0.6353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.799 | 1.0 | 31440 | 0.9061 | 0.6341 | | 0.5094 | 2.0 | 62880 | 1.0978 | 0.6270 | | 0.3276 | 3.0 | 94320 | 1.3038 | 0.6245 | | 0.2273 | 4.0 | 125760 | 1.4093 | 0.6210 | | 0.1682 | 5.0 | 157200 | 1.5859 | 0.6122 | | 0.1302 | 6.0 | 188640 | 1.7206 | 0.6197 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
ArBert/roberta-base-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "roberta", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # T-shirt Diffusion API Inference ![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/tshirt-diffusion.png) ## Get API Key Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed. Replace Key in below code, change **model_id** to "t-shirt-diffusion" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Model link: [View model](https://stablediffusionapi.com/models/t-shirt-diffusion) Credits: [View credits](https://civitai.com/?query=T-shirt Diffusion) View all models: [View Models](https://stablediffusionapi.com/models) import requests import json url = "https://stablediffusionapi.com/api/v3/dreambooth" payload = json.dumps({ "key": "", "model_id": "t-shirt-diffusion", "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
ArBert/roberta-base-finetuned-ner
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_mnli_96 results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.565500406834825 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_mnli_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.9477 - Accuracy: 0.5655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.9142 | 1.0 | 31440 | 0.9328 | 0.5686 | | 0.8099 | 2.0 | 62880 | 0.9523 | 0.5752 | | 0.7371 | 3.0 | 94320 | 1.0072 | 0.5737 | | 0.6756 | 4.0 | 125760 | 1.0606 | 0.5750 | | 0.6229 | 5.0 | 157200 | 1.1116 | 0.5739 | | 0.5784 | 6.0 | 188640 | 1.1396 | 0.5795 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
ArJakusz/DialoGPT-small-stark
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: Dharkelf/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Aracatto/Catto
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Brain22/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Araf/Ummah
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_sa_GLUE_Experiment_logit_kd_data_aug_mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.6560211554109032 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_logit_kd_data_aug_mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5076 - Accuracy: 0.6560 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.4734 | 1.0 | 31440 | 0.5068 | 0.6496 | | 0.3743 | 2.0 | 62880 | 0.5281 | 0.6379 | | 0.3454 | 3.0 | 94320 | 0.5361 | 0.6354 | | 0.3333 | 4.0 | 125760 | 0.5399 | 0.6350 | | 0.3265 | 5.0 | 157200 | 0.5409 | 0.6379 | | 0.3219 | 6.0 | 188640 | 0.5377 | 0.6413 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
Aran/DialoGPT-medium-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- datasets: - marcuskd/reviews_binary_not4_concat language: - 'no' - nb - nn metrics: - accuracy - recall - precision - f1 --- # Model Card for Model ID Sentiment analysis for Norwegian reviews. # Model Description This model is trained using a self-concatinated dataset consisting of Norwegian Review Corpus dataset (https://github.com/ltgoslo/norec) and a sentiment dataset from huggingface (https://huggingface.co/datasets/sepidmnorozy/Norwegian_sentiment). Its purpose is merely for testing. - **Developed by:** Simen Aabol and Marcus Dragsten - **Finetuned from model:** norbert2 # Direct Use Plug in Norwegian sentences to check its sentiment (negative to positive) # Training Details ## Training and Testing Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/marcuskd/reviews_binary_not4_concat ### Preprocessing Tokenized using: ```python tokenizer = AutoTokenizer.from_pretrained("ltgoslo/norbert2") ``` Training arguments for this model: ```python training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=10, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, ) ``` # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> Evaluation by testing using test-split of dataset. ```python { 'accuracy': 0.8357214261912695, 'recall': 0.886873508353222, 'precision': 0.8789025543992431, 'f1': 0.8828700403896412, 'total_time_in_seconds': 94.33071640000003, 'samples_per_second': 31.81360340013276, 'latency_in_seconds': 0.03143309443518828 } ```
ArashEsk95/bert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - https://huggingface.co/abhijit1247/male-nurse These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the abhijit1247/male-nurse dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
ArashEsk95/bert-base-uncased-finetuned-sst2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m --- https://civitai.com/models/6336/maplebofuri110
ArashEsk95/bert-base-uncased-finetuned-stsb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m --- https://civitai.com/models/6314/z23
ArcQ/gpt-experiments
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m --- https://civitai.com/models/6213/dark-magician-girl-lora
ArenaGrenade/char-cnn
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="akoshel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Arpita/opus-mt-en-ro-finetuned-synthon-to-reactant
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="Brain22/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: RamonAnkersmit/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BSC-LT/roberta-base-biomedical-es
[ "pytorch", "roberta", "fill-mask", "es", "arxiv:2109.03570", "arxiv:2109.07765", "transformers", "biomedical", "spanish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
161
null
--- license: mit datasets: - pubmed_qa language: - en metrics: - accuracy library_name: transformers pipeline_tag: text-generation tags: - medical widget: - text: "question: Can 'high-risk' human papillomaviruses (HPVs) be detected in human breast milk? context: Using polymerase chain reaction techniques, we evaluated the presence of HPV infection in human breast milk collected from 21 HPV-positive and 11 HPV-negative mothers. Of the 32 studied human milk specimens, no 'high-risk' HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58 or 58 DNA was detected. answer: This preliminary case-control study indicates the absence of mucosal 'high-risk' HPV types in human breast milk." inference: parameters: max_new_tokens: 250 do_sample: False --- ## BioGPT Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms. ## Citation If you find BioGPT useful in your research, please cite the following paper: ```latex @article{10.1093/bib/bbac409, author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan}, title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}", journal = {Briefings in Bioinformatics}, volume = {23}, number = {6}, year = {2022}, month = {09}, abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. 
Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}", issn = {1477-4054}, doi = {10.1093/bib/bbac409}, url = {https://doi.org/10.1093/bib/bbac409}, note = {bbac409}, eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf}, } ```
Battlehooks/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert_sa_GLUE_Experiment_data_aug_qqp_192 results: - task: name: Text Classification type: text-classification dataset: name: GLUE QQP type: glue args: qqp metrics: - name: Accuracy type: accuracy value: 0.7887212465990601 - name: F1 type: f1 value: 0.7232374287195439 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_data_aug_qqp_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.4878 - Accuracy: 0.7887 - F1: 0.7232 - Combined Score: 0.7560 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:| | 0.4161 | 1.0 | 29671 | 0.4878 | 0.7887 | 0.7232 | 0.7560 | | 0.2684 | 2.0 | 59342 | 0.5351 | 0.7965 | 0.7274 | 0.7619 | | 0.1914 | 3.0 | 89013 | 0.6132 | 0.7992 | 0.7324 | 0.7658 | | 0.1466 | 4.0 | 118684 | 0.6588 | 0.7999 | 0.7350 | 0.7674 | | 0.1178 | 5.0 | 148355 | 0.7657 | 0.7983 | 0.7338 | 0.7661 | | 0.0978 | 6.0 | 178026 | 0.7679 | 0.8040 | 0.7313 | 0.7677 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
BeIR/sparta-msmarco-distilbert-base-v1
[ "pytorch", "distilbert", "feature-extraction", "arxiv:2009.13013", "arxiv:2104.08663", "transformers" ]
feature-extraction
{ "architectures": [ "DistilBertModel" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
106
null
--- license: apache-2.0 language: - en pipeline_tag: text-generation ---
BearThreat/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: sgoodfriend/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Beelow/wav2vec2-ukrainian-model-large
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- This is a conversion of https://huggingface.co/CarperAI/diff-codegen-350m-v2 into GPT-J implementation via the script https://gist.github.com/moyix/7896575befbe1b99162ccfec8d135566 For details, please refer to Carper's model card. Anyone can do this conversion easily. Uploaded here simply for easy access without moving stuff back-and-forth.
Begimay/Task
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_stsb results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.17697442802445312 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_stsb This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 1.4410 - Pearson: 0.1664 - Spearmanr: 0.1770 - Combined Score: 0.1717 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|:--------------:| | 0.5057 | 1.0 | 2518 | 1.4410 | 0.1664 | 0.1770 | 0.1717 | | 0.2904 | 2.0 | 5036 | 1.5531 | 0.1681 | 0.1758 | 0.1720 | | 0.2164 | 3.0 | 7554 | 1.5013 | 0.1732 | 0.1766 | 0.1749 | | 0.1385 | 4.0 | 10072 | 1.4793 | 0.1854 | 0.1821 | 0.1837 | | 0.0944 | 5.0 | 12590 | 1.5300 | 0.1694 | 0.1741 | 0.1717 | | 0.0682 | 6.0 | 15108 | 1.5759 | 0.1695 | 0.1691 | 0.1693 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
Belin/T5-Terms-and-Conditions
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### carrot_commercial_v1 Dreambooth model Sample pictures of this concept: ![0](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00087-1720633401-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![1](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00192-1789510950-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![2](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00222-3855371334-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![3](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00230-3855371342-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![4](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00182-1789510940-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![5](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00194-1789510952-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![6](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00236-3855371348-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![7](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00238-3855371350-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![8](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00174-2092912628-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![9](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00079-4004019013-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![10](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00121-1978687305-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![11](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00212-3855371324-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![12](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00213-3855371325-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![13](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00223-3855371335-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) 
![14](https://huggingface.co/yuanzheng/carrot-commercial-v1/resolve/main/sample_images/00088-1720633402-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png)
BenDavis71/GPT-2-Finetuning-AIRaid
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: davidhajdu/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BigSalmon/DaBlank
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
4
null
--- license: creativeml-openrail-m tags: - text-to-image - photography - new zealand - minnesota widget: - text: phtdzk1 language: - en library_name: diffusers pipeline_tag: text-to-image --- [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/Duskfallcrew/photography-and-landscapes) ### Photography And Landscapes Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model # You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! 1.5 Base model trained with a mix of hand picked images of Minnesota from the internet, and some stock images from unsplash plus local NZ photography by https://unsplash.com/@duskfallcrew # Concept Tag: phtdzk1 # Coffee is nice: https://ko-fi.com/DUSKFALLcrew # Model Updates on CivIt: https://civitai.com/user/duskfallcrew # Sample Images Are available here ![phtdzk1 0](https://huggingface.co/Duskfallcrew/photography-and-landscapes/resolve/main/HF%20Concept%20Photo%20Landsacpe/3956233b-1eee-4354-a59a-a456613785f8.jpeg) ![phtdzk1 0](https://huggingface.co/Duskfallcrew/photography-and-landscapes/resolve/main/HF%20Concept%20Photo%20Landsacpe/eaaa4819-048f-48f6-af38-15d12c7ad2e5.jpeg) ![phtdzk1 0](https://huggingface.co/Duskfallcrew/photography-and-landscapes/resolve/main/HF%20Concept%20Photo%20Landsacpe/910cde01-1e4d-4435-ae49-81c5f9dd6b1d.jpeg) # More sample images will be added to the folder with text files here: https://huggingface.co/Duskfallcrew/photography-and-landscapes/tree/main/HF%20Concept%20Photo%20Landsacpe phtdzk1 (use that on your prompt)
BigSalmon/FormalBerta2
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- datasets: - ChancesYuan/KGEditor language: - en pipeline_tag: token-classification --- # Model description We propose a task that aims to enable data-efficient and fast updates to KG embeddings without damaging the performance of the rest. We provide four experimental edit object models of the PT-KGE in the paper experiments used. ### How to use Here is how to use this model: ```python >>> from transformers import BertForMaskedLM >>> model = BertForMaskedLM.from_pretrained(pretrained_model_name_or_path="zjunlp/KGEditor", subfolder="E-FB15k237") ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2301-10405, author = {Siyuan Cheng and Ningyu Zhang and Bozhong Tian and Zelin Dai and Feiyu Xiong and Wei Guo and Huajun Chen}, title = {Editing Language Model-based Knowledge Graph Embeddings}, journal = {CoRR}, volume = {abs/2301.10405}, year = {2023}, url = {https://doi.org/10.48550/arXiv.2301.10405}, doi = {10.48550/arXiv.2301.10405}, eprinttype = {arXiv}, eprint = {2301.10405}, timestamp = {Thu, 26 Jan 2023 17:49:16 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2301-10405.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
BigSalmon/GPTHeHe
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qnli_256 results: - task: name: Text Classification type: text-classification dataset: name: GLUE QNLI type: glue args: qnli metrics: - name: Accuracy type: accuracy value: 0.57166392092257 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qnli_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4367 - Accuracy: 0.5717 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.329 | 1.0 | 16604 | 0.4367 | 0.5717 | | 0.2656 | 2.0 | 33208 | 0.4505 | 0.5702 | | 0.2457 | 3.0 | 49812 | 0.4501 | 0.5814 | | 0.2364 | 4.0 | 66416 | 0.4499 | 0.5832 | | 0.2311 | 5.0 | 83020 | 0.4527 | 0.5870 | | 0.2277 | 6.0 | 99624 | 0.4556 | 0.5887 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
BigSalmon/GPTNeo350MInformalToFormalLincoln6
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- tags: - spacy - token-classification language: - en model-index: - name: en_engagement_LSTM results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.0 - name: NER Recall type: recall value: 0.0 - name: NER F Score type: f_score value: 0.0 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.0 - task: name: LEMMA type: token-classification metrics: - name: Lemma Accuracy type: accuracy value: 0.0 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.0 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.0 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9144831558 --- | Feature | Description | | --- | --- | | **Name** | `en_engagement_LSTM` | | **Version** | `1.1.6` | | **spaCy** | `>=3.4.4,<3.5.0` | | **Default Pipeline** | `transformer`, `parser`, `tagger`, `ner`, `attribute_ruler`, `lemmatizer`, `trainable_transformer`, `spancat` | | **Components** | `transformer`, `parser`, `tagger`, `ner`, `attribute_ruler`, `lemmatizer`, `trainable_transformer`, `spancat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (122 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | | **`spancat`** | `ATTRIBUTION`, `ENTERTAIN`, `PROCLAIM`, `SOURCES`, `MONOGLOSS`, `CITATION`, `ENDOPHORIC`, `DENY`, `JUSTIFYING`, `COUNTER` | </details> ### Accuracy | Type | Score | | --- | --- | | `DEP_UAS` | 0.00 | | `DEP_LAS` | 0.00 | | `DEP_LAS_PER_TYPE` | 0.00 | | `SENTS_P` | 89.82 | | `SENTS_R` | 93.14 | | `SENTS_F` | 91.45 | | `TAG_ACC` | 0.00 | | `ENTS_F` | 0.00 | | `ENTS_P` | 0.00 | | `ENTS_R` | 0.00 | | `LEMMA_ACC` | 0.00 | | `SPANS_SC_F` | 77.22 | | `SPANS_SC_P` | 79.33 | | `SPANS_SC_R` | 75.22 | | `TRAINABLE_TRANSFORMER_LOSS` | 885.71 | | `SPANCAT_LOSS` | 104829.66 |
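A minimal usage sketch for this pipeline, assuming the `en_engagement_LSTM` package has been installed into the current environment (e.g. from its released wheel) and that the `spancat` component writes to spaCy's default `"sc"` spans key:

```python
import spacy

# Assumes the en_engagement_LSTM package is pip-installed and discoverable.
nlp = spacy.load("en_engagement_LSTM")

doc = nlp("Previous studies have arguably overstated this effect.")

# Engagement spans predicted by the spancat component ("sc" is spaCy's default key).
if "sc" in doc.spans:
    for span in doc.spans["sc"]:
        print(span.text, span.label_)

# The parser component also provides sentence segmentation and dependencies.
for sent in doc.sents:
    print(sent.text)
```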
BigSalmon/GoodMaskResults
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - spacy - token-classification language: - en model-index: - name: en_engagement_LSTM_f1 results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.0 - name: NER Recall type: recall value: 0.0 - name: NER F Score type: f_score value: 0.0 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.0 - task: name: LEMMA type: token-classification metrics: - name: Lemma Accuracy type: accuracy value: 0.0 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.0 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.0 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9362363919 --- | Feature | Description | | --- | --- | | **Name** | `en_engagement_LSTM_f1` | | **Version** | `1.0.0` | | **spaCy** | `>=3.4.4,<3.5.0` | | **Default Pipeline** | `transformer`, `parser`, `tagger`, `ner`, `attribute_ruler`, `lemmatizer`, `trainable_transformer`, `spancat` | | **Components** | `transformer`, `parser`, `tagger`, `ner`, `attribute_ruler`, `lemmatizer`, `trainable_transformer`, `spancat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (122 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | | **`spancat`** | `ENTERTAIN`, `COUNTER`, `PROCLAIM`, `MONOGLOSS`, `DENY`, `ATTRIBUTION`, `JUSTIFYING`, `SOURCES`, `ENDOPHORIC`, `CITATION` | </details> ### Accuracy | Type | Score | | --- | --- | | `DEP_UAS` | 0.00 | | `DEP_LAS` | 0.00 | | `DEP_LAS_PER_TYPE` | 0.00 | | `SENTS_P` | 92.71 | | `SENTS_R` | 94.55 | | `SENTS_F` | 93.62 | | `TAG_ACC` | 0.00 | | `ENTS_F` | 0.00 | | `ENTS_P` | 0.00 | | `ENTS_R` | 0.00 | | `LEMMA_ACC` | 0.00 | | `SPANS_SC_F` | 76.29 | | `SPANS_SC_P` | 78.52 | | `SPANS_SC_R` | 74.18 | | `TRAINABLE_TRANSFORMER_LOSS` | 143.59 | | `SPANCAT_LOSS` | 84050.85 |
BigSalmon/InformalToFormalLincoln14
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - spacy - token-classification language: - en model-index: - name: en_engagement_LSTM_f2 results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.0 - name: NER Recall type: recall value: 0.0 - name: NER F Score type: f_score value: 0.0 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.0 - task: name: LEMMA type: token-classification metrics: - name: Lemma Accuracy type: accuracy value: 0.0 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.0 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.0 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9228764982 --- | Feature | Description | | --- | --- | | **Name** | `en_engagement_LSTM_f2` | | **Version** | `1.0.0` | | **spaCy** | `>=3.4.4,<3.5.0` | | **Default Pipeline** | `transformer`, `parser`, `tagger`, `ner`, `attribute_ruler`, `lemmatizer`, `trainable_transformer`, `spancat` | | **Components** | `transformer`, `parser`, `tagger`, `ner`, `attribute_ruler`, `lemmatizer`, `trainable_transformer`, `spancat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (122 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | | **`spancat`** | `ATTRIBUTION`, `COUNTER`, `SOURCES`, `CITATION`, `DENY`, `ENTERTAIN`, `MONOGLOSS`, `PROCLAIM`, `ENDOPHORIC`, `JUSTIFYING` | </details> ### Accuracy | Type | Score | | --- | --- | | `DEP_UAS` | 0.00 | | `DEP_LAS` | 0.00 | | `DEP_LAS_PER_TYPE` | 0.00 | | `SENTS_P` | 91.43 | | `SENTS_R` | 93.16 | | `SENTS_F` | 92.29 | | `TAG_ACC` | 0.00 | | `ENTS_F` | 0.00 | | `ENTS_P` | 0.00 | | `ENTS_R` | 0.00 | | `LEMMA_ACC` | 0.00 | | `SPANS_SC_F` | 74.80 | | `SPANS_SC_P` | 75.85 | | `SPANS_SC_R` | 73.77 | | `TRAINABLE_TRANSFORMER_LOSS` | 226.60 | | `SPANCAT_LOSS` | 81318.13 |
BigSalmon/InformalToFormalLincolnDistilledGPT2
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-cased-finetuned-lowR100-3-cased-DA-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-finetuned-lowR100-3-cased-DA-20 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 6.8611 | | 6.5268 | 2.0 | 2 | 8.5069 | | 6.5268 | 3.0 | 3 | 6.4383 | | 6.3552 | 4.0 | 4 | 5.2540 | | 6.3552 | 5.0 | 5 | 6.2490 | | 5.5713 | 6.0 | 6 | 5.8587 | | 5.5713 | 7.0 | 7 | 5.6369 | | 5.0248 | 8.0 | 8 | 5.1667 | | 5.0248 | 9.0 | 9 | 4.8407 | | 4.364 | 10.0 | 10 | 5.0590 | | 4.364 | 11.0 | 11 | 4.8647 | | 3.6607 | 12.0 | 12 | 3.3072 | | 3.6607 | 13.0 | 13 | 3.4963 | | 3.3901 | 14.0 | 14 | 4.0039 | | 3.3901 | 15.0 | 15 | 3.5993 | | 3.1245 | 16.0 | 16 | 2.2179 | | 3.1245 | 17.0 | 17 | 1.6414 | | 3.1906 | 18.0 | 18 | 3.1965 | | 3.1906 | 19.0 | 19 | 3.1463 | | 2.7243 | 20.0 | 20 | 3.1866 | | 2.7243 | 21.0 | 21 | 1.0648 | | 2.944 | 22.0 | 22 | 3.2413 | | 2.944 | 23.0 | 23 | 3.1838 | | 2.7114 | 24.0 | 24 | 3.8036 | | 2.7114 | 25.0 | 25 | 2.2897 | | 2.4176 | 26.0 | 26 | 3.6953 | | 2.4176 | 27.0 | 27 | 3.3176 | | 2.4277 | 28.0 | 28 | 2.9940 | | 2.4277 | 29.0 | 29 | 3.0186 | | 2.4099 | 30.0 | 30 | 3.0385 | | 2.4099 | 31.0 | 31 | 1.9323 | | 2.2141 | 32.0 | 32 | 2.2952 | | 2.2141 | 33.0 | 33 | 3.5302 | | 2.4007 | 34.0 | 34 | 3.7787 | | 2.4007 | 35.0 | 35 | 3.3718 | | 2.2619 | 36.0 | 36 | 2.2895 | | 2.2619 | 37.0 | 37 | 2.7433 | | 2.4834 | 38.0 | 38 | 3.5129 | | 2.4834 | 39.0 | 39 | 1.7792 | | 2.122 | 40.0 | 40 | 2.5250 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
BigSalmon/MrLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1612.46 +/- 43.33 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
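Since the usage section above is still a TODO, here is a hedged sketch of the standard SB3 loading pattern. The repository id and checkpoint filename are placeholders (the usual zoo convention is `a2c-AntBulletEnv-v0.zip`), `pybullet_envs` must be imported to register the environment, and any `VecNormalize` statistics shipped with the model are not handled here.

```python
import gym
import pybullet_envs  # noqa: F401 (registers AntBulletEnv-v0)

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename; adjust to this model's actual repository.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```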
BigSalmon/MrLincoln10
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: ryanaspen/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BigSalmon/MrLincoln12
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Forkits/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
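The snippet above uses `load_from_hub` without showing where it comes from; for a pickled Q-table like this one, a minimal stand-in along these lines (an assumption; the Deep RL course notebook defines its own helper) is enough to make it runnable:

```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dict from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Forkits/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # non-slippery variant, per the model name
```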
BigSalmon/MrLincoln125MNeo
[ "pytorch", "tensorboard", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.62 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Forkits/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BigSalmon/MrLincoln6
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-uncased-finetuned-lowR100-5-uncased-DA-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-lowR100-5-uncased-DA-20 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9006 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5116 | 1.0 | 1 | 6.5297 | | 6.6949 | 2.0 | 2 | 6.9289 | | 6.0946 | 3.0 | 3 | 7.6464 | | 5.8742 | 4.0 | 4 | 4.8191 | | 5.4365 | 5.0 | 5 | 6.1273 | | 5.171 | 6.0 | 6 | 4.5528 | | 4.4944 | 7.0 | 7 | 4.8541 | | 4.1146 | 8.0 | 8 | 3.4321 | | 3.4689 | 9.0 | 9 | 2.4818 | | 3.6228 | 10.0 | 10 | 2.4444 | | 3.147 | 11.0 | 11 | 1.0668 | | 2.969 | 12.0 | 12 | 3.5394 | | 2.9788 | 13.0 | 13 | 3.1681 | | 2.9108 | 14.0 | 14 | 1.6325 | | 2.9377 | 15.0 | 15 | 2.0480 | | 2.6179 | 16.0 | 16 | 2.6157 | | 2.8978 | 17.0 | 17 | 3.3663 | | 2.6496 | 18.0 | 18 | 2.6341 | | 2.592 | 19.0 | 19 | 2.6462 | | 2.5212 | 20.0 | 20 | 2.2172 | | 2.402 | 21.0 | 21 | 3.3419 | | 2.3146 | 22.0 | 22 | 1.8095 | | 2.5215 | 23.0 | 23 | 2.7622 | | 2.1736 | 24.0 | 24 | 3.9402 | | 2.4366 | 25.0 | 25 | 2.3742 | | 2.1603 | 26.0 | 26 | 2.4520 | | 2.21 | 27.0 | 27 | 3.8185 | | 2.1954 | 28.0 | 28 | 4.0015 | | 2.6556 | 29.0 | 29 | 2.4132 | | 2.3936 | 30.0 | 30 | 3.8690 | | 2.2442 | 31.0 | 31 | 3.7408 | | 2.2486 | 32.0 | 32 | 2.5657 | | 2.5066 | 33.0 | 33 | 3.6632 | | 2.0527 | 34.0 | 34 | 2.9892 | | 2.6207 | 35.0 | 35 | 3.5594 | | 2.296 | 36.0 | 36 | 2.3785 | | 2.4068 | 37.0 | 37 | 3.6126 | | 2.257 | 38.0 | 38 | 1.0477 | | 2.0597 | 39.0 | 39 | 1.5386 | | 2.1702 | 40.0 | 40 | 2.4686 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
BigSalmon/MrLincolnBerta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.49 +/- 0.35 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
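The usage section here is also a TODO; a hedged sketch using SB3's built-in evaluation helper (repo id and filename are placeholders, and `panda_gym` must be imported to register the environment):

```python
import gym
import panda_gym  # noqa: F401 (registers PandaReachDense-v2)

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename; adjust to this model's actual repository.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```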
BigSalmon/Points
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-cased-sigir-LR100-1-cased-40 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-sigir-LR100-1-cased-40 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.661 | 1.0 | 1 | 6.7088 | | 7.1425 | 2.0 | 2 | 7.0634 | | 6.8918 | 3.0 | 3 | 6.2872 | | 6.1875 | 4.0 | 4 | 5.5826 | | 5.6201 | 5.0 | 5 | 5.4365 | | 5.181 | 6.0 | 6 | 3.7720 | | 5.0548 | 7.0 | 7 | 5.5019 | | 4.3957 | 8.0 | 8 | 3.2004 | | 3.993 | 9.0 | 9 | 2.4284 | | 3.593 | 10.0 | 10 | 3.2126 | | 3.754 | 11.0 | 11 | 2.7146 | | 3.061 | 12.0 | 12 | 2.5308 | | 3.0496 | 13.0 | 13 | 2.8430 | | 3.1128 | 14.0 | 14 | 1.2934 | | 2.7098 | 15.0 | 15 | 1.5709 | | 2.5303 | 16.0 | 16 | 1.9032 | | 2.3475 | 17.0 | 17 | 2.1788 | | 2.4054 | 18.0 | 18 | 1.5836 | | 2.6168 | 19.0 | 19 | 3.7077 | | 2.5972 | 20.0 | 20 | 2.8996 | | 2.287 | 21.0 | 21 | 2.1028 | | 2.1383 | 22.0 | 22 | 2.0755 | | 2.443 | 23.0 | 23 | 1.6498 | | 2.0233 | 24.0 | 24 | 2.2023 | | 2.2446 | 25.0 | 25 | 2.4627 | | 1.9087 | 26.0 | 26 | 2.3244 | | 2.1685 | 27.0 | 27 | 1.9509 | | 1.9055 | 28.0 | 28 | 2.6149 | | 1.9063 | 29.0 | 29 | 2.0499 | | 2.3587 | 30.0 | 30 | 1.1757 | | 2.0389 | 31.0 | 31 | 1.1181 | | 1.9223 | 32.0 | 32 | 1.6205 | | 2.0361 | 33.0 | 33 | 1.8381 | | 2.1823 | 34.0 | 34 | 0.7964 | | 2.2411 | 35.0 | 35 | 2.0179 | | 1.8976 | 36.0 | 36 | 1.1467 | | 1.9321 | 37.0 | 37 | 1.5334 | | 2.257 | 38.0 | 38 | 2.1575 | | 2.0543 | 39.0 | 39 | 1.5084 | | 1.7383 | 40.0 | 40 | 1.8176 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
BigSalmon/SimplifyText
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image --- これらのモデルは私が2022年12月末から2023年1月上旬にかけて作成したマージモデルです。そのレシピの多くがすでに失われていますが、バックアップを兼ねて公開することにしました。 These are merge models I created between late December 2022 and early January 2023. Many of their recipes have already been lost, but I decided to publish them as a backup. sample <img src="https://i.imgur.com/alpB7IK.jpg" width="900" height=""> <img src="https://i.imgur.com/MDc9SHp.jpg" width="900" height=""> <img src="https://i.imgur.com/oSGvybv.jpg" width="900" height="">
BigSalmon/T5F
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Kuman-generator Dreambooth model trained by Kumar-kun with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)! To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars). Sample pictures of this concept:
BigSalmon/prepositions
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-cased-sigir-LR100-0-cased-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-sigir-LR100-0-cased-20 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.0635 | 1.0 | 1 | 6.6184 | | 7.131 | 2.0 | 2 | 7.0072 | | 7.0969 | 3.0 | 3 | 5.8833 | | 6.087 | 4.0 | 4 | 5.2094 | | 5.8314 | 5.0 | 5 | 5.3317 | | 5.1807 | 6.0 | 6 | 5.0294 | | 5.0853 | 7.0 | 7 | 4.3234 | | 4.5785 | 8.0 | 8 | 4.0070 | | 4.0047 | 9.0 | 9 | 3.5287 | | 3.5236 | 10.0 | 10 | 4.0761 | | 4.2192 | 11.0 | 11 | 3.2353 | | 3.6715 | 12.0 | 12 | 3.6203 | | 3.4242 | 13.0 | 13 | 2.7801 | | 3.1152 | 14.0 | 14 | 3.6127 | | 2.9266 | 15.0 | 15 | 2.2571 | | 3.4507 | 16.0 | 16 | 2.8120 | | 3.0439 | 17.0 | 17 | 3.1393 | | 2.6443 | 18.0 | 18 | 3.4350 | | 2.8907 | 19.0 | 19 | 1.0329 | | 2.8591 | 20.0 | 20 | 2.0586 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
BigTooth/DialoGPT-Megumin
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- tags: - TensorRT - Text2Image - Stable Diffusion - Image2Image - SDA --- # burnerbaby/blah converted into TensorRT <img src="https://i.imgur.com/fQS926g.png"> Model converted from diffusers into TensorRT for accelerated inference up to 4x faster. For how to use the model check https://github.com/nicholaskao1029/sda-node forked from https://github.com/chavinlo/sda-node/ This model was automatically converted by SDA-node Compilation configuration: ```json { "_class_name": "StableDiffusionAccelerated_Base", "_sda_version": "0.1.2", "_trt_version": "8.5.3", "_cuda_version": "none", "_cudnn_version": "none", "_onnx2trt_version": "8.5.3", "unet": { "precision": "fp16", "path": "engine/unet.plan" }, "clip": { "path": "engine/clip.plan" }, "de_vae": { "path": "engine/de_vae.plan" } } ```
BinksSachary/ShaxxBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 614.00 +/- 310.71 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga xiazeng -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga xiazeng -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga xiazeng ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
BitanBiswas/mbert-bengali-ner-finetuned-ner
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
Access to model Yngor/huawei-noah_TinyBERTGeneral is restricted and you are not in the authorized list. Visit https://huggingface.co/Yngor/huawei-noah_TinyBERTGeneral to ask for access.
BlueGamerBeast/DialoGPT-small-joshua
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="Ransaka/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
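`evaluate_agent` in the snippet above is the course's own helper, not a library function; a minimal greedy-evaluation version compatible with that call might look like this (written against the pre-0.26 `gym` API, where `reset` returns just the state and `step` returns four values):

```python
import numpy as np


def evaluate_agent(env, max_steps, n_eval_episodes, qtable, eval_seed):
    """Roll out the greedy policy and return (mean_reward, std_reward)."""
    episode_rewards = []
    for episode in range(n_eval_episodes):
        state = env.reset(seed=eval_seed[episode]) if eval_seed else env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))      # greedy action
            state, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```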
BobBraico/distilbert-base-uncased-finetuned-imdb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: akanametov/MLAgents-poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BogdanKuloren/continual-learning-paper-embeddings-model
[ "pytorch", "mpnet", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "MPNetModel" ], "model_type": "mpnet", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Aditya02/Speech_Analyzer_Model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Aditya02/Speech_Analyzer_Model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0905 - Validation Loss: 0.0685 - Train Precision: 0.9115 - Train Recall: 0.9144 - Train F1: 0.9130 - Train Accuracy: 0.9749 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 24435, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.0905 | 0.0685 | 0.9115 | 0.9144 | 0.9130 | 0.9749 | 0 | ### Framework versions - Transformers 4.25.1 - TensorFlow 2.9.2 - Datasets 2.8.0 - Tokenizers 0.13.2
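The optimizer block above corresponds to what Transformers' TF `create_optimizer` helper typically produces; roughly (a reconstruction for reference, not the original script):

```python
from transformers import create_optimizer

# Mirrors the AdamWeightDecay / PolynomialDecay settings listed above.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=24_435,   # decay_steps from the schedule above
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```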
Bosio/full-sentence-distillroberta3-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details このLoRAはワンピースのヤマトを学習したLoRAです。 64dimが最新versionですがモデルとの相性の差が激しいので旧versionのほうが合う可能性もあります。 This LoRA is the LoRA that studied Yamato of One Piece. 64dim is the latest version, but there is a possibility that the older version may be better suited to your needs because of the large difference in compatibility with the model. インスタンスプロンプト yamatowanpi instance prompt yamatowanpi ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses インスタンスプロンプト yamatowanpi instance prompt yamatowanpi <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details Danbooruのファンアート91枚をそのままで20回繰り返し×エポック3 バッチサイズ2×768size×64dim 正規化画像無し Danbooru's 91 pieces of fan art as is, repeated 20 times x Epoch 3 Batch size 2 x 768size x 64dim No normalized image ## Training Data NovelAI_NSFW <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. 
--> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
BossLee/t5-gec
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: model2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2319 - Accuracy: 0.9479 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 224 | 0.2074 | 0.9453 | | No log | 2.0 | 448 | 0.2421 | 0.9440 | | 0.2593 | 3.0 | 672 | 0.2319 | 0.9479 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
BotterHax/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-cased-sigir-LR100-0-prepend-40 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-sigir-LR100-0-prepend-40 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6453 | 1.0 | 3 | 2.0522 | | 2.0488 | 2.0 | 6 | 1.7600 | | 1.9917 | 3.0 | 9 | 2.3036 | | 1.6084 | 4.0 | 12 | 1.4050 | | 1.856 | 5.0 | 15 | 1.3598 | | 1.6471 | 6.0 | 18 | 1.5274 | | 1.2358 | 7.0 | 21 | 1.6642 | | 1.4355 | 8.0 | 24 | 1.6109 | | 1.5753 | 9.0 | 27 | 1.8690 | | 1.5374 | 10.0 | 30 | 1.7986 | | 1.5063 | 11.0 | 33 | 1.4979 | | 1.2185 | 12.0 | 36 | 0.7390 | | 1.6042 | 13.0 | 39 | 1.1280 | | 1.1938 | 14.0 | 42 | 1.1252 | | 1.3215 | 15.0 | 45 | 1.6827 | | 1.0789 | 16.0 | 48 | 1.6349 | | 1.095 | 17.0 | 51 | 2.6303 | | 1.0088 | 18.0 | 54 | 0.9429 | | 1.015 | 19.0 | 57 | 1.4165 | | 1.2432 | 20.0 | 60 | 2.1061 | | 1.3365 | 21.0 | 63 | 1.5785 | | 1.2704 | 22.0 | 66 | 2.1850 | | 0.972 | 23.0 | 69 | 1.7769 | | 0.9052 | 24.0 | 72 | 1.5376 | | 0.976 | 25.0 | 75 | 2.1072 | | 1.1134 | 26.0 | 78 | 2.4425 | | 0.8328 | 27.0 | 81 | 1.5937 | | 1.1662 | 28.0 | 84 | 1.3542 | | 0.8575 | 29.0 | 87 | 1.2236 | | 0.728 | 30.0 | 90 | 1.2229 | | 1.1601 | 31.0 | 93 | 2.3723 | | 0.9426 | 32.0 | 96 | 1.6974 | | 0.8246 | 33.0 | 99 | 1.6610 | | 0.9777 | 34.0 | 102 | 1.1179 | | 0.7588 | 35.0 | 105 | 1.8809 | | 0.6929 | 36.0 | 108 | 1.9128 | | 0.6794 | 37.0 | 111 | 1.2689 | | 0.811 | 38.0 | 114 | 1.6715 | | 0.6805 | 39.0 | 117 | 2.0424 | | 0.9157 | 40.0 | 120 | 1.4210 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Branex/gpt-neo-2.7B
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: aj555/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BrianTin/MTBERT
[ "pytorch", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: sgoodfriend/poca-SoccerTwos-v2 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Brokette/projetCS
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - generated_from_trainer datasets: - funsd model-index: - name: layoutlm-funsd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset. It achieves the following results on the evaluation set: - Loss: 0.7037 - Answer: {'precision': 0.7206703910614525, 'recall': 0.7972805933250927, 'f1': 0.7570422535211268, 'number': 809} - Header: {'precision': 0.3006993006993007, 'recall': 0.36134453781512604, 'f1': 0.3282442748091603, 'number': 119} - Question: {'precision': 0.7585004359197908, 'recall': 0.8169014084507042, 'f1': 0.7866184448462928, 'number': 1065} - Overall Precision: 0.7130 - Overall Recall: 0.7817 - Overall F1: 0.7458 - Overall Accuracy: 0.7989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 1.7815 | 1.0 | 10 | 1.5703 | {'precision': 0.022222222222222223, 'recall': 0.022249690976514216, 'f1': 0.022235948116121063, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.21046643913538113, 'recall': 0.17370892018779344, 'f1': 0.19032921810699588, 'number': 1065} | 0.1202 | 0.1019 | 0.1103 | 0.3789 | | 1.4352 | 2.0 | 20 | 1.2331 | {'precision': 0.12166172106824925, 'recall': 0.10135970333745364, 'f1': 0.11058664868509778, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4863247863247863, 'recall': 0.5342723004694836, 'f1': 0.50917225950783, 'number': 1065} | 0.3530 | 0.3266 | 0.3393 | 0.5662 | | 1.0804 | 3.0 | 30 | 0.9725 | {'precision': 0.4528985507246377, 'recall': 0.4635352286773795, 'f1': 0.4581551618814906, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.6272806255430061, 'recall': 0.6779342723004694, 'f1': 0.6516245487364621, 'number': 1065} | 0.5447 | 0.5504 | 0.5475 | 0.6845 | | 0.8495 | 4.0 | 40 | 0.7990 | {'precision': 0.5910973084886129, 'recall': 0.7058096415327565, 'f1': 0.6433802816901407, 'number': 809} | {'precision': 0.05970149253731343, 'recall': 0.03361344537815126, 'f1': 0.04301075268817204, 'number': 119} | {'precision': 0.6702222222222223, 'recall': 0.707981220657277, 'f1': 0.6885844748858447, 'number': 1065} | 0.6158 | 0.6668 | 0.6403 | 0.7510 | | 0.6866 | 5.0 | 50 | 0.7357 | {'precision': 0.6541436464088398, 
'recall': 0.7317676143386898, 'f1': 0.6907817969661612, 'number': 809} | {'precision': 0.2235294117647059, 'recall': 0.15966386554621848, 'f1': 0.18627450980392157, 'number': 119} | {'precision': 0.7028619528619529, 'recall': 0.784037558685446, 'f1': 0.7412339103417664, 'number': 1065} | 0.6639 | 0.7255 | 0.6934 | 0.7698 | | 0.5626 | 6.0 | 60 | 0.6982 | {'precision': 0.6594871794871795, 'recall': 0.7948084054388134, 'f1': 0.7208520179372198, 'number': 809} | {'precision': 0.28378378378378377, 'recall': 0.17647058823529413, 'f1': 0.21761658031088082, 'number': 119} | {'precision': 0.6939417781274587, 'recall': 0.828169014084507, 'f1': 0.7551369863013697, 'number': 1065} | 0.6664 | 0.7757 | 0.7169 | 0.7872 | | 0.4875 | 7.0 | 70 | 0.6710 | {'precision': 0.6905286343612335, 'recall': 0.7750309023485785, 'f1': 0.7303436225975539, 'number': 809} | {'precision': 0.2336448598130841, 'recall': 0.21008403361344538, 'f1': 0.22123893805309733, 'number': 119} | {'precision': 0.7287145242070117, 'recall': 0.819718309859155, 'f1': 0.7715422006186478, 'number': 1065} | 0.6891 | 0.7652 | 0.7252 | 0.7924 | | 0.4499 | 8.0 | 80 | 0.6635 | {'precision': 0.6888412017167382, 'recall': 0.7935723114956736, 'f1': 0.7375071797817346, 'number': 809} | {'precision': 0.25210084033613445, 'recall': 0.25210084033613445, 'f1': 0.25210084033613445, 'number': 119} | {'precision': 0.7314814814814815, 'recall': 0.815962441314554, 'f1': 0.771415889924545, 'number': 1065} | 0.6883 | 0.7732 | 0.7283 | 0.7977 | | 0.3939 | 9.0 | 90 | 0.6686 | {'precision': 0.709070796460177, 'recall': 0.792336217552534, 'f1': 0.7483946293053124, 'number': 809} | {'precision': 0.24817518248175183, 'recall': 0.2857142857142857, 'f1': 0.265625, 'number': 119} | {'precision': 0.7311557788944724, 'recall': 0.819718309859155, 'f1': 0.7729083665338645, 'number': 1065} | 0.6926 | 0.7767 | 0.7323 | 0.7970 | | 0.3522 | 10.0 | 100 | 0.6728 | {'precision': 0.7094668117519043, 'recall': 0.8059332509270705, 'f1': 0.7546296296296295, 'number': 809} | {'precision': 0.3135593220338983, 'recall': 0.31092436974789917, 'f1': 0.31223628691983124, 'number': 119} | {'precision': 0.7573149741824441, 'recall': 0.8262910798122066, 'f1': 0.7903008531656939, 'number': 1065} | 0.7135 | 0.7873 | 0.7486 | 0.8034 | | 0.3124 | 11.0 | 110 | 0.6859 | {'precision': 0.7041800643086816, 'recall': 0.8121137206427689, 'f1': 0.7543053960964409, 'number': 809} | {'precision': 0.3076923076923077, 'recall': 0.3025210084033613, 'f1': 0.30508474576271183, 'number': 119} | {'precision': 0.7731316725978647, 'recall': 0.815962441314554, 'f1': 0.793969849246231, 'number': 1065} | 0.7185 | 0.7837 | 0.7497 | 0.8006 | | 0.306 | 12.0 | 120 | 0.6947 | {'precision': 0.720489977728285, 'recall': 0.799752781211372, 'f1': 0.7580550673696543, 'number': 809} | {'precision': 0.2773722627737226, 'recall': 0.31932773109243695, 'f1': 0.296875, 'number': 119} | {'precision': 0.7567332754126846, 'recall': 0.8178403755868544, 'f1': 0.7861010830324908, 'number': 1065} | 0.7118 | 0.7807 | 0.7447 | 0.7987 | | 0.283 | 13.0 | 130 | 0.6948 | {'precision': 0.7201783723522854, 'recall': 0.7985166872682324, 'f1': 0.7573270808909731, 'number': 809} | {'precision': 0.30597014925373134, 'recall': 0.3445378151260504, 'f1': 0.3241106719367589, 'number': 119} | {'precision': 0.7585004359197908, 'recall': 0.8169014084507042, 'f1': 0.7866184448462928, 'number': 1065} | 0.7149 | 0.7812 | 0.7466 | 0.8000 | | 0.2726 | 14.0 | 140 | 0.7002 | {'precision': 0.7119205298013245, 'recall': 0.7972805933250927, 'f1': 0.7521865889212828, 
'number': 809} | {'precision': 0.3049645390070922, 'recall': 0.36134453781512604, 'f1': 0.3307692307692308, 'number': 119} | {'precision': 0.762532981530343, 'recall': 0.8140845070422535, 'f1': 0.787465940054496, 'number': 1065} | 0.7120 | 0.7802 | 0.7446 | 0.8001 | | 0.264 | 15.0 | 150 | 0.7037 | {'precision': 0.7206703910614525, 'recall': 0.7972805933250927, 'f1': 0.7570422535211268, 'number': 809} | {'precision': 0.3006993006993007, 'recall': 0.36134453781512604, 'f1': 0.3282442748091603, 'number': 119} | {'precision': 0.7585004359197908, 'recall': 0.8169014084507042, 'f1': 0.7866184448462928, 'number': 1065} | 0.7130 | 0.7817 | 0.7458 | 0.7989 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Brunomezenga/NN
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: lunared473/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Bryan190/Aguy190
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: Scrwed/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Brykee/BrykeeBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: anidzk2 --- [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/Duskfallcrew/digital-vivid-memories) ### Digital Vivid Memories Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to monthly support the EARTH & DUSK media projects and not just AI: https://www.patreon.com/earthndusk anidzk2 (use that on your prompt)
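A minimal `diffusers` sketch for trying the concept locally. The repo id is assumed from the linked Space, and the prompt terms beyond `anidzk2` are illustrative.
```python
# Sketch only: loading the Dreambooth checkpoint with diffusers and using the concept token.
import torch
from diffusers import StableDiffusionPipeline

# Repo id assumed from the linked Space; replace with the actual model repository if different.
pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/digital-vivid-memories", torch_dtype=torch.float16
).to("cuda")

# "anidzk2" is the concept token from the card; the remaining prompt terms are illustrative.
image = pipe("anidzk2, portrait, vivid colors, digital painting").images[0]
image.save("anidzk2_sample.png")
```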
Bubb-les/DisloGPT-medium-HarryPotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: asr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # asr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
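The card above stops at the training setup. As a hedged illustration, transcription with the `transformers` ASR pipeline could look like the sketch below; the repo id is a placeholder, since the card does not say where the checkpoint is hosted.
```python
# Sketch: transcribing an audio file with the fine-tuned checkpoint via the ASR pipeline.
# "your-username/asr" is a placeholder repo id; the card does not state the model's Hub location.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="your-username/asr")
print(asr("sample.wav")["text"])
```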
BumBelDumBel/ZORK_AI_FANTASY
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-base-DreamBank-Generation-Char results: [] language: - en widget: - text: "I'm in an auditorium. Susie S is concerned at her part in this disability awareness spoof we are preparing. I ask, 'Why not do it? Lots of AB's represent us in a patronizing way. Why shouldn't we represent ourselves in a good, funny way?' I watch the video we all made. It is funny. I try to sit on a folding chair. Some guy in front talks to me. Merle is in the audience somewhere. [BL]" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-DreamBank-Generation-Char This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the DB emotion classification. It achieves the following results on the evaluation set (please note they refer to best uploaded model): - Loss: 0.3047 - Rouge1: 0.8609 - Rouge2: 0.7956 - Rougel: 0.8476 - Rougelsum: 0.8578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 24 | 0.4863 | 0.7670 | 0.6655 | 0.7575 | 0.7634 | | No log | 2.0 | 48 | 0.4284 | 0.6870 | 0.5207 | 0.6846 | 0.6875 | | No log | 3.0 | 72 | 0.3541 | 0.7659 | 0.6742 | 0.7600 | 0.7625 | | No log | 4.0 | 96 | 0.3211 | 0.8147 | 0.7251 | 0.7965 | 0.8078 | | No log | 5.0 | 120 | 0.3103 | 0.8400 | 0.7747 | 0.8313 | 0.8371 | | No log | 6.0 | 144 | 0.3220 | 0.8538 | 0.7867 | 0.8285 | 0.8515 | | No log | 7.0 | 168 | 0.3047 | 0.8609 | 0.7956 | 0.8476 | 0.8578 | | No log | 8.0 | 192 | 0.3106 | 0.8574 | 0.7836 | 0.8401 | 0.8509 | | No log | 9.0 | 216 | 0.3054 | 0.8532 | 0.7857 | 0.8378 | 0.8481 | | No log | 10.0 | 240 | 0.3136 | 0.8455 | 0.7789 | 0.8282 | 0.8432 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.5.1 - Tokenizers 0.12.1 ### Cite If you use the model, please cite the pre-print. ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.14828, doi = {10.48550/ARXIV.2302.14828}, url = {https://arxiv.org/abs/2302.14828}, author = {Bertolini, Lorenzo and Elce, Valentina and Michalak, Adriana and Bernardi, Giulio and Weeds, Julie}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Automatic Scoring of Dream Reports' Emotional Content with Large Language Models}, publisher = {arXiv}, year = {2023}, copyright = {Creative Commons Attribution 4.0 International} } ```
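A hedged inference sketch for the card above. The checkpoint id is a placeholder (the card names the model but not its Hub repo), and the input reuses part of the widget example.
```python
# Sketch: scoring a dream report by text generation with the fine-tuned T5 model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "<user>/t5-base-DreamBank-Generation-Char"  # placeholder repo id, not from the card
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Input reuses a shortened version of the widget example from the card.
report = ("I'm in an auditorium. Susie S is concerned at her part in this "
          "disability awareness spoof we are preparing. [BL]")
inputs = tokenizer(report, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```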
BunakovD/sd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - breadlicker45/1m-YA-dataset train-eval-index: - config: default task: token-classification task_id: entity_extraction splits: eval_split: test col_mapping: tokens: tokens labels: tags ---
Bwehfuk/Ron
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: lunared473/ppo-PyramidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: Waterboy96/poca-Soccer3 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.97 +/- 18.46 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
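The usage section above is left as a TODO. One possible completion, assuming the checkpoint was pushed with `huggingface_sb3`; the repo id and .zip filename are guesses and should be checked against the repository's file list.
```python
# Sketch completing the TODO above: download the checkpoint from the Hub and evaluate it.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# repo_id and filename are assumptions; check the repository for the exact names.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```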
CM-CA/DialoGPT-small-cartman
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Camzure/MaamiBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: dn-gh/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Canyonevo/DialoGPT-medium-KingHenry
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/t_rex_relational_similarity model-index: - name: relbert/relbert-roberta-large-nce-e-t-rex results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6395039682539683 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4385026737967914 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.44807121661721067 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5364091161756531 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.692 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4166666666666667 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4699074074074074 - task: name: Analogy Questions (ConceptNet Analogy) type: multiple-choice-qa dataset: name: ConceptNet Analogy args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.1535234899328859 - task: name: Analogy Questions (TREX Analogy) type: multiple-choice-qa dataset: name: TREX Analogy args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7431693989071039 - task: name: Analogy Questions (NELL-ONE Analogy) type: multiple-choice-qa dataset: name: NELL-ONE Analogy args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.63 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8882025011300286 - name: F1 (macro) type: f1_macro value: 0.8818868766881587 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8187793427230048 - name: F1 (macro) type: f1_macro value: 0.614523761077342 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6153846153846154 - name: F1 (macro) type: f1_macro value: 0.608439250967712 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9608402309243931 - name: F1 (macro) type: f1_macro value: 0.879616547843592 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: 
relation-classification metrics: - name: F1 type: f1 value: 0.8799749294891883 - name: F1 (macro) type: f1_macro value: 0.8773393679892617 --- # relbert/relbert-roberta-large-nce-e-t-rex RelBERT based on [roberta-large](https://huggingface.co/roberta-large) fine-tuned on [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) for more detail of fine-tuning). This model achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-e-t-rex/raw/main/analogy.forward.json)): - Accuracy on SAT (full): 0.4385026737967914 - Accuracy on SAT: 0.44807121661721067 - Accuracy on BATS: 0.5364091161756531 - Accuracy on U2: 0.4166666666666667 - Accuracy on U4: 0.4699074074074074 - Accuracy on Google: 0.692 - Accuracy on ConceptNet Analogy: 0.1535234899328859 - Accuracy on T-Rex Analogy: 0.7431693989071039 - Accuracy on NELL-ONE Analogy: 0.63 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-e-t-rex/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8882025011300286 - Micro F1 score on CogALexV: 0.8187793427230048 - Micro F1 score on EVALution: 0.6153846153846154 - Micro F1 score on K&H+N: 0.9608402309243931 - Micro F1 score on ROOT09: 0.8799749294891883 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-e-t-rex/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.6395039682539683 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-large-nce-e-t-rex") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, ) ``` ### Training hyperparameters - model: roberta-large - max_length: 64 - epoch: 10 - batch: 32 - random_seed: 0 - lr: 5e-06 - lr_warmup: 10 - aggregation_mode: average_no_mask - data: relbert/t_rex_relational_similarity - data_name: filter_unified.min_entity_4_max_predicate_10 - exclude_relation: None - split: train - split_valid: validation - loss_function: nce - classification_loss: False - loss_function_config: {'temperature': 0.05, 'num_negative': 400, 'num_positive': 10} - augment_negative_by_positive: True See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-large-nce-e-t-rex/raw/main/finetuning_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.emnlp-main.712/). 
``` @inproceedings{ushio-etal-2021-distilling, title = "Distilling Relation Embeddings from Pretrained Language Models", author = "Ushio, Asahi and Camacho-Collados, Jose and Schockaert, Steven", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.712", doi = "10.18653/v1/2021.emnlp-main.712", pages = "9044--9062", abstract = "Pre-trained language models have been found to capture a surprisingly rich amount of lexical knowledge, ranging from commonsense properties of everyday concepts to detailed factual knowledge about named entities. Among others, this makes it possible to distill high-quality word vectors from pre-trained language models. However, it is currently unclear to what extent it is possible to distill relation embeddings, i.e. vectors that characterize the relationship between two words. Such relation embeddings are appealing because they can, in principle, encode relational knowledge in a more fine-grained way than is possible with knowledge graphs. To obtain relation embeddings from a pre-trained language model, we encode word pairs using a (manually or automatically generated) prompt, and we fine-tune the language model such that relationally similar word pairs yield similar output vectors. We find that the resulting relation embeddings are highly competitive on analogy (unsupervised) and relation classification (supervised) benchmarks, even without any task-specific fine-tuning. Source code to reproduce our experimental results and the model checkpoints are available in the following repository: https://github.com/asahi417/relbert", } ```
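Beyond the `get_embedding` call shown in the card, here is a short sketch, not from the original card, that compares two word pairs via cosine similarity of their relation embeddings.
```python
# Sketch: relationally similar pairs should yield similar vectors; the comparison is illustrative.
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-large-nce-e-t-rex")
v_a = np.array(model.get_embedding(["Tokyo", "Japan"]))    # shape (n_dim,), as stated in the card
v_b = np.array(model.get_embedding(["Paris", "France"]))

cosine = float(v_a @ v_b / (np.linalg.norm(v_a) * np.linalg.norm(v_b)))
print(f"cosine similarity: {cosine:.3f}")
```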
Capreolus/birch-bert-large-car_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- inference: true language: - en tags: - stable-diffusion - text-to-image license: creativeml-openrail-m --- # SD_Black_Ancient_Egyptian_Style is an open source Stable Diffusion embedding and model on art style of black Ancient Egypt for SD2.1, by Akumetsu971 (https://www.tiktok.com/@akumetsu971) --- ### What for ?: Ancient Egyptian theme for Men, Women, Animals, Egyptian Gods, Egyptian backgrounds, ### Model used to train: DreamBooth model based on SD v2-1_512-ema-pruned.ckpt Embedding based on SD v2-1_512-ema-pruned.ckpt ### Files Files available : - EMB_Blck_Egpt.zip (Best embedding version is around 1000 steps) - (model in development) - Blck_Egpt_DataSet (if you want to train your own model) - NG_DeepNegative_V1_75T (embedding used for negative prompt) ### Prompt Keyword for model is Bck_Egpt If the image is blurry, use an upscaller like: 4x_fatal_Anime_500000_G, 4x-AnimeSharp, 4x_NMKD-Siax_200k (they are all in my files) You may use NG_DeepNegative_V1_75T (in Files and Versions) ### Example for Embedding Positive Prompt: man with head of an hawk, hawk face, standing, art by EMB_Blck_Egpt_V4-1000 Negative Prompt: NG_DeepNegative_V1_75T, mediocre, average, bad, wrong, error, fault, badly_drawn, poorly_drawn, low_quality, no_quality, bad_quality, no_resolution, low_resolution, lowres, normal_resolution, disfigured, deformed, distortion, bad_anatomy, no_detail, low_detail, normal_detail, scribble, rushed, unfinished, blur, blurry, claws, misplaced, disconnected, nonsense, random, noise, deformation, 3d, dull, boring, uninteresting, screencap, text, frame, out_of_frame, title, description, sexual, text, error, logo, watermark, bad_perspective, bad_proportions, cinematic, jpg_artifacts, jpeg_artifacts, extra_leg, missing_leg, extra_arm, missing_arm, long_hand, bad_hands, mutated_hand, extra_finger, missing_finger, broken_finger, fused_fingers, extra_feet, missing_feet, fused_feet, long_feet, missing_limbs, extra_limbs, fused_limbs, claw, extra_digit, fewer_digits, elves_ears, naked, wet, uncensored, long_neck <img src="https://huggingface.co/Akumetsu971/SD_Black_Ancient_Egyptian_Style/resolve/main/Example1.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Black_Ancient_Egyptian_Style/resolve/main/Example2.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Black_Ancient_Egyptian_Style/resolve/main/Example3.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Black_Ancient_Egyptian_Style/resolve/main/Example4.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Black_Ancient_Egyptian_Style/resolve/main/Example5.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Black_Ancient_Egyptian_Style/resolve/main/Example6.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Black_Ancient_Egyptian_Style/resolve/main/Example7.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Black_Ancient_Egyptian_Style/resolve/main/Example8.png" width="50%"/> <img src="https://huggingface.co/Akumetsu971/SD_Black_Ancient_Egyptian_Style/resolve/main/Example9.png" width="50%"/>
Capreolus/birch-bert-large-mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 292.46 +/- 19.75 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Capreolus/electra-base-msmarco
[ "pytorch", "tf", "electra", "text-classification", "arxiv:2008.09093", "transformers" ]
text-classification
{ "architectures": [ "ElectraForSequenceClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
110
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: kasrahabib/500-100-bucket-finetunned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # kasrahabib/500-100-bucket-finetunned This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0050 - Validation Loss: 0.1358 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3514 | 0.1493 | 0 | | 0.1166 | 0.1159 | 1 | | 0.0628 | 0.1066 | 2 | | 0.0282 | 0.1249 | 3 | | 0.0245 | 0.1338 | 4 | | 0.0181 | 0.1298 | 5 | | 0.0103 | 0.1246 | 6 | | 0.0085 | 0.1303 | 7 | | 0.0044 | 0.1343 | 8 | | 0.0050 | 0.1358 | 9 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
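As an illustrative reconstruction of the optimizer settings listed above: only the PolynomialDecay schedule and Adam parameters come from the card; the commented compile line is an assumption.
```python
# Sketch: the optimizer/schedule described above, rebuilt with tf.keras.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=2800,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)

# The model itself is only identified by its base checkpoint, so compilation stays schematic:
# model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```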
Carlork314/Xd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model Benson26400/kyoka is restricted and you are not in the authorized list. Visit https://huggingface.co/Benson26400/kyoka to ask for access.
CarlosTron/Yo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="eugene-d/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
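The `load_from_hub` helper in the snippet above is a course-notebook function rather than a library import. Below is a self-contained sketch of such a loader built on `huggingface_hub`; the pickled-dict format and filename are taken from the snippet, and the rest is assumed.
```python
# Sketch: a stand-alone version of the course's load_from_hub helper, using huggingface_hub.
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dict from the Hub (format assumed from the snippet above)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="eugene-d/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)
```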
Cathy/reranking_model
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: kasrahabib/0_50_-bucket-finetunned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # kasrahabib/0_50_-bucket-finetunned This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6086 - Validation Loss: 0.7757 - Epoch: 6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 364, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.6044 | 0.7757 | 0 | | 0.6022 | 0.7757 | 1 | | 0.5981 | 0.7757 | 2 | | 0.6062 | 0.7757 | 3 | | 0.6047 | 0.7757 | 4 | | 0.6055 | 0.7757 | 5 | | 0.6086 | 0.7757 | 6 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
dccuchile/albert-large-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: creativeml-openrail-m --- 6-WD+AOM2-SFW+炫彩厚涂 (a vivid thick-paint style merge). Model author: 八十八键
dccuchile/albert-tiny-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion duplicated_from: Atre/MoonTea --- A stylized anime model. You can use it with LoRA. MoonTea is my merge, created by combining different models.
dccuchile/albert-tiny-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-muchocine results: [] datasets: - muchocine language: - es --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the muchocine dataset. It achieves the following results on the evaluation set: - Loss: 1.3601 - Accuracy: 0.4826 ## Model description Predicts a rating for Spanish-language cinema reviews. ## Intended uses & limitations Trained as part of a machine learning module at university. ## Training and evaluation data A small project using the muchocine dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.3546 | 0.4284 | | 1.3676 | 2.0 | 776 | 1.2768 | 0.4723 | | 0.9726 | 3.0 | 1164 | 1.3601 | 0.4826 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
dccuchile/albert-tiny-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: eldraco/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
dccuchile/albert-tiny-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -0.68 +/- 0.11 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
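One way the TODO block above could be filled in — a minimal sketch assuming the agent was saved with `huggingface_sb3`; the repo id and filename below are placeholders, not values taken from this card:

```python
import gym
import panda_gym  # registers PandaReachDense-v2 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and filename -- replace with the actual ones for this model.
checkpoint = load_from_hub(
    repo_id="your-username/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)

# Load the trained agent and evaluate it for a few episodes.
# If training used VecNormalize, its saved statistics must be loaded as well.
model = A2C.load(checkpoint)
env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```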
dccuchile/albert-tiny-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.73 +/- 0.44 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="eugene-d/q-FrozenLake-v1-4x4", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
dccuchile/albert-xlarge-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 491.00 +/- 146.85 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Schwarzschild009 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Schwarzschild009 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Schwarzschild009 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
dccuchile/albert-xlarge-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: Clawoo/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
dccuchile/albert-xlarge-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="eugene-d/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
dccuchile/albert-xlarge-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 17.00 +/- 16.26 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-fine-tuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.6107419227947289 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-fine-tuned-cola This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8073 - Matthews Correlation: 0.6107 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4681 | 1.0 | 1069 | 0.5613 | 0.4892 | | 0.321 | 2.0 | 2138 | 0.6681 | 0.5851 | | 0.1781 | 3.0 | 3207 | 0.8073 | 0.6107 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
dccuchile/albert-xxlarge-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- tags: - generated_from_trainer model-index: - name: FPT_Viettel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FPT_Viettel This model is a fine-tuned version of [HuyenNguyen/FPT_medium](https://huggingface.co/HuyenNguyen/FPT_medium) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4464 - eval_wer: 22.0855 - eval_runtime: 694.5327 - eval_samples_per_second: 1.778 - eval_steps_per_second: 0.112 - epoch: 2.54 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 24 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
dccuchile/albert-xxlarge-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: Pearson/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
dccuchile/albert-xxlarge-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
68
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 50.70 +/- 37.16 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
dccuchile/albert-base-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
586
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="s-himmi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
dccuchile/albert-tiny-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
393
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: paraphraser-german-mt5-small results: [] datasets: - paws-x - tapaco language: - de metrics: - perplexity --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paraphraser-german-mt5-small This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the paws-x (de) and tapaco (de) dataset. It achieves the following results on the evaluation set: - Loss: 1.7678 - Perplexity: 5.86 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.7064 | 0.05 | 2000 | 2.0731 | | 2.8673 | 0.11 | 4000 | 2.0420 | | 2.6133 | 0.16 | 6000 | 2.0080 | | 2.4563 | 0.21 | 8000 | 1.9556 | | 2.385 | 0.27 | 10000 | 1.9090 | | 2.3122 | 0.32 | 12000 | 1.9127 | | 2.2775 | 0.38 | 14000 | 1.8658 | | 2.2323 | 0.43 | 16000 | 1.8407 | | 2.17 | 0.48 | 18000 | 1.8342 | | 2.1672 | 0.54 | 20000 | 1.8328 | | 2.1488 | 0.59 | 22000 | 1.8071 | | 2.1026 | 0.64 | 24000 | 1.8328 | | 2.1036 | 0.7 | 26000 | 1.7979 | | 2.0854 | 0.75 | 28000 | 1.7895 | | 2.0594 | 0.81 | 30000 | 1.7944 | | 2.0793 | 0.86 | 32000 | 1.7726 | | 2.0661 | 0.91 | 34000 | 1.7762 | | 2.0722 | 0.97 | 36000 | 1.7714 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
dccuchile/albert-xlarge-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
91
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1680.68 +/- 138.87 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
dccuchile/bert-base-spanish-wwm-cased-finetuned-pos
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-classic results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="s-himmi/q-Taxi-v3-classic", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
dccuchile/bert-base-spanish-wwm-cased-finetuned-qa-mlqa
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.94 +/- 20.35 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
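A minimal sketch of what the TODO block above might look like, assuming the checkpoint was uploaded with `huggingface_sb3` and the environment uses the classic (pre-0.26) Gym step API; the repo id and filename are placeholders:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename -- replace with the actual ones for this model.
checkpoint = load_from_hub(
    repo_id="your-username/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy (classic 4-tuple step API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
episode_return = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_return += reward
print(f"episode return: {episode_return:.2f}")
```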
dccuchile/bert-base-spanish-wwm-cased-finetuned-xnli
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: creativeml-openrail-m tags: - text-to-image --- ### chltti style Dreambooth model trained by thewhiterider27 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-4 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: 17.jpg (use that on your prompt) 16.jpg (use that on your prompt) 15.jpg (use that on your prompt) 14.jpg (use that on your prompt) 13.jpg (use that on your prompt) 12.jpg (use that on your prompt) 11.jpg (use that on your prompt) 10.jpg (use that on your prompt) 9.jpg (use that on your prompt) 8.jpg (use that on your prompt) 7.jpg (use that on your prompt) 6.jpg (use that on your prompt) 5.jpg (use that on your prompt) 4.jpg (use that on your prompt) 3.jpg (use that on your prompt) 2.jpg (use that on your prompt) 1.jpg (use that on your prompt) ![1.jpg 0](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/1.jpg)![2.jpg 1](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/2.jpg)![3.jpg 2](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/3.jpg)![4.jpg 3](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/4.jpg)![5.jpg 4](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/5.jpg)![6.jpg 5](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/6.jpg)![7.jpg 6](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/7.jpg)![8.jpg 7](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/8.jpg)![9.jpg 8](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/9.jpg)![10.jpg 9](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/10.jpg)![11.jpg 10](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/11.jpg)![12.jpg 11](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/12.jpg)![13.jpg 12](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/13.jpg)![14.jpg 13](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/14.jpg)![15.jpg 14](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/15.jpg)![16.jpg 15](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/16.jpg)![17.jpg 16](https://huggingface.co/thewhiterider27/chltti-style/resolve/main/concept_images/17.jpg)
dccuchile/bert-base-spanish-wwm-uncased-finetuned-mldoc
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
39
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1617.55 +/- 276.20 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pos
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: eldraco/ppo-pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa
[ "pytorch", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: dn-gh/ppornd-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate-1
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- library_name: stable-baselines3 tags: - PandaPushDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPushDense-v2 type: PandaPushDense-v2 metrics: - type: mean_reward value: -9.32 +/- 4.88 name: mean_reward verified: false --- # **A2C** Agent playing **PandaPushDense-v2** This is a trained model of a **A2C** agent playing **PandaPushDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="nikz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Chakita/KROBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: twitter-health-users results: [] widget: - text: >- We are the #UnitedNations’ health agency - #HealthForAll - text: >- Journal of Anesthesiology and Pain Therapy provides insight into original research and highlights the latest advancements in anesthesiology - text: >- Human First. EMDR Therapist | Field Instructor | Dog & Plant Mom - text: >- Board-certified #Dermatologist from @Harvard - text: >- Human. Person. Father. Huge Real Madrid fan --- Use this model to detect Twitter users' profiles related to healthcare. User profile classification may be useful when searching for health information on Twitter. For a certain health topic, tweets from physicians or organizations (e.g. ```Board-certified dermatologist```) may be more reliable than undefined or vague profiles (e.g. ```Human. Person. Father```). The model expects the user's ```description``` text field (see [Twitter API](https://developer.twitter.com/en/docs/twitter-api/v1/data-dictionary/object-model/user) docs) as input and returns a label for each profile: - `not-health-related` - `health-related` - `health-related/person` - `health-related/organization` - `health-related/publishing` - `health-related/physician` - `health-related/news` - `health-related/academic` F1 score is 0.9
Chakita/Kalbert
[ "pytorch", "tensorboard", "albert", "fill-mask", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.40 +/- 0.51 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Chan/distilroberta-base-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en thumbnail: "https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page1.jpg" license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - safetensors - diffusers inference: true --- **Lomo Diffusion** ![Header](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page1.jpg) [*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/lomo-1.0.ckpt) - - - [*SAFETENSORS DOWNLOAD LINK*](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/lomo-1.0.safetensors) This is a dreambooth model trained on a diverse set of stylized photographs. Use the activation token **lomo style** in your prompt (I recommend at the start) This model is inspired by the Lomography movement, which embraces the imperfections and style of old LOMO cameras. The model excels at producing bright saturated colors as well as a variety of film artifacts that add to the illusion of a real photograph. When using most models, I typically use **blur haze** in my negative prompt. I encourage you to experiment and see what works well for you. Trained from 1.5 with VAE. Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/paramets_for_samples.txt) You can [see here a non-cherrypicked batch of 49 images here.](https://i.imgur.com/cfIj3iq.jpg) And you can [see here a direct comparison between Analog Style and Lomo Style.](https://i.imgur.com/ugdFzPI.jpg) ![Environments Example](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page2.jpg)
Chandanbhat/distilbert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 650.50 +/- 348.01 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hectorjelly -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hectorjelly -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hectorjelly ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```