Columns:
- modelId: string (length 4 to 81)
- tags: list
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (0 to 59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51 to 438k)
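These columns correspond closely to what the Hugging Face Hub API reports per model repository. As a rough sketch only (the exact dataset behind this dump is not named here, and the repo id below is simply one taken from the rows that follow), comparable fields can be fetched with the `huggingface_hub` client:

```python
from huggingface_hub import HfApi, ModelCard

api = HfApi()

# Metadata for a single model repo; the id is one of the rows below.
info = api.model_info("DeepPavlov/rubert-base-cased")

print(info.id)            # modelId column
print(info.tags)          # tags column
print(info.pipeline_tag)  # pipeline_tag column
print(info.downloads)     # downloads column
print(info.config)        # config column (may be None if the repo has no config.json)

# The "card" column holds the raw README / model-card text.
card = ModelCard.load("DeepPavlov/rubert-base-cased")
print(card.content[:200])
```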
Declan/FoxNews_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon-notyet - animal widget: - text: a renaissance painting of catloxi cat wearing a crown sitting on a throne, elegant, close-up --- # DreamBooth model for the catloxi concept trained by nicolasneubauer on the nicolasneubauer/loxi dataset. This is a Stable Diffusion model fine-tuned on the catloxi concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of catloxi cat** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `cat` images for the animal theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('nicolasneubauer/catloxi-cat') image = pipeline().images[0] image ```
Declan/HuffPost_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Abhi03/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Declan/HuffPost_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Abhi03/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Declan/Reuters_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: - es license: apache-2.0 tags: - "national library of spain" - "spanish" - "bne" - "PlanTL-GOB-ES/MLDoc" - "text-classification" datasets: - "PlanTL-GOB-ES/MLDoc" metrics: - "f1" model-index: - name: roberta-base-bne-mldoc results: - task: type: text-classification dataset: type: mldoc name: PlanTL-GOB-ES/MLDoc metrics: - name: F1 type: f1 value: 0.9664 widget: - ' FRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abrió la sesión de corros con baja por la caída del viernes en Wall Street y una toma de beneficios. El dólar ayudaba a apuntalar al mercado, que pronto podría reanudar su tendencia alcista. Volkswagen bajaba por los daños ocasionados por la huelga de camioneros en España. Preussag participaba en un joint venture de exploración petrolífera en Filipinas con Atlantic Richfield Co. A las 0951 GMT, el Dax 30 bajaba 10,49 puntos, un 0,32 pct, a 3.237,69 tras abrir a un máximo de 3.237,69. (c) Reuters Limited 1997. ' --- # Spanish RoBERTa-base trained on BNE finetuned for the Spanish portion of the MLDoc dataset. ## Table of contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-bne-mldoc** is a text classification model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. The finetuning corpus consists of 14,458 news articles from Reuters classified in four categories: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). ## Intended uses and limitations **roberta-base-bne-mldoc** model can be used to classify texts into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). The model is limited by its training dataset (news stories) and may not generalize well for all use cases. ## How to use Here is how to use this model: ```python from transformers import pipeline from pprint import pprint nlp = pipeline("text-classification", model="PlanTL-GOB-ES/roberta-base-bne-mldoc") example = ' FRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abrió la sesión de corros con baja por la caída del viernes en Wall Street y una toma de beneficios. El dólar ayudaba a apuntalar al mercado, que pronto podría reanudar su tendencia alcista. Volkswagen bajaba por los daños ocasionados por la huelga de camioneros en España. 
Preussag participaba en un joint venture de exploración petrolífera en Filipinas con Atlantic Richfield Co. A las 0951 GMT, el Dax 30 bajaba 10,49 puntos, un 0,32 pct, a 3.237,69 tras abrir a un máximo de 3.237,69. (c) Reuters Limited 1997. ' results = nlp(example) pprint(results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training For training and evaluation we used the Spanish portion of the Multilingual Document Classification Corpus (MLDoc) [(Schwenk and Li, 2018)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf), a cross-lingual document classification dataset covering 8 languages. ### Training procedure The model was trained with a batch size of 32 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing F1. ## Evaluation results We evaluated the *roberta-base-bne-mldoc* on the XNLI test set against standard multilingual and monolingual baselines: | Model | MLDoc (F1) | | ------------|:----| | roberta-base-bne | 96.64 | | roberta-large-bne | 97.02 | | BETO | **97.14** | | mBERT | 96.17 | | BERTIN | 96.68 | | ELECTRA | 95.65 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to <[email protected]> ### Copyright Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## Citing information If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405): ``` @article{, abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). 
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.}, author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas}, doi = {10.26342/2022-68-3}, issn = {1135-5948}, journal = {Procesamiento del Lenguaje Natural}, keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural}, publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural}, title = {MarIA: Spanish Language Models}, volume = {68}, url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley}, year = {2022}, } ``` ## Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. </details>
Declan/Reuters_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-finetuned-xlsum-with-multi-news-10-epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xlsum-with-multi-news-10-epoch This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2332 - Rouge1: 31.4802 - Rouge2: 9.9475 - Rougel: 24.6687 - Rougelsum: 24.7013 - Gen Len: 18.8025 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.7314 | 1.0 | 20543 | 2.3867 | 29.3997 | 8.2875 | 22.8406 | 22.8871 | 18.8204 | | 2.6652 | 2.0 | 41086 | 2.3323 | 30.3992 | 8.9058 | 23.6168 | 23.6626 | 18.8447 | | 2.632 | 3.0 | 61629 | 2.3002 | 30.8662 | 9.2869 | 24.0683 | 24.11 | 18.8122 | | 2.6221 | 4.0 | 82172 | 2.2785 | 31.143 | 9.5737 | 24.3473 | 24.381 | 18.7911 | | 2.5925 | 5.0 | 102715 | 2.2631 | 31.2144 | 9.6904 | 24.4419 | 24.4796 | 18.8133 | | 2.5812 | 6.0 | 123258 | 2.2507 | 31.3371 | 9.7959 | 24.5801 | 24.6166 | 18.7836 | | 2.5853 | 7.0 | 143801 | 2.2437 | 31.3593 | 9.8156 | 24.5533 | 24.5852 | 18.8103 | | 2.5467 | 8.0 | 164344 | 2.2377 | 31.368 | 9.8807 | 24.6226 | 24.6518 | 18.799 | | 2.5571 | 9.0 | 184887 | 2.2337 | 31.4356 | 9.9092 | 24.6543 | 24.6891 | 18.8075 | | 2.5563 | 10.0 | 205430 | 2.2332 | 31.4802 | 9.9475 | 24.6687 | 24.7013 | 18.8025 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.13.1+cpu - Datasets 2.8.0 - Tokenizers 0.10.3
Declan/Reuters_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 650.00 +/- 222.46 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gstaff -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gstaff -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gstaff ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Declan/WallStreetJournal_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-12-27T02:10:20Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnext-tiny-224-finetuned-eurosat-albumentations results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9814814814814815 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224-finetuned-eurosat-albumentations This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0671 - Accuracy: 0.9815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1452 | 1.0 | 190 | 0.1335 | 0.97 | | 0.0683 | 2.0 | 380 | 0.0825 | 0.9763 | | 0.0584 | 3.0 | 570 | 0.0671 | 0.9815 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
Declan/WallStreetJournal_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.37 +/- 53.68 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
DeepChem/SmilesTokenizer_PubChem_1M
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
227
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: ja datasets: - lmqg/qag_jaquad pipeline_tag: text2text-generation tags: - questions and answers generation widget: - text: "ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている6月28日は2人の14回目の結婚記念日であった。" example_title: "Questions & Answers Generation Example 1" model-index: - name: lmqg/mt5-small-jaquad-qag results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qag_jaquad type: default args: default metrics: - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) type: qa_aligned_f1_score_bertscore_question_answer_generation value: 58.35 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) type: qa_aligned_recall_bertscore_question_answer_generation value: 58.38 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) type: qa_aligned_precision_bertscore_question_answer_generation value: 58.34 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) type: qa_aligned_f1_score_moverscore_question_answer_generation value: 39.19 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) type: qa_aligned_recall_moverscore_question_answer_generation value: 39.17 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) type: qa_aligned_precision_moverscore_question_answer_generation value: 39.21 --- # Model Card of `lmqg/mt5-small-jaquad-qag` This model is fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question & answer pair generation task on the [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** ja - **Training data:** [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="ja", model="lmqg/mt5-small-jaquad-qag") # model prediction question_answer_pairs = model.generate_qa("フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-jaquad-qag") output = pipe("ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている6月28日は2人の14回目の結婚記念日であった。") ``` ## Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_jaquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-------------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 58.35 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) | | QAAlignedF1Score (MoverScore) | 39.19 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) | | QAAlignedPrecision (BERTScore) | 58.34 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) | | QAAlignedPrecision (MoverScore) | 39.21 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) | | QAAlignedRecall (BERTScore) | 58.38 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) | | QAAlignedRecall (MoverScore) | 39.17 | default | [lmqg/qag_jaquad](https://huggingface.co/datasets/lmqg/qag_jaquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_jaquad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: None - model: google/mt5-small - max_length: 512 - max_length_output: 256 - epoch: 18 - batch: 8 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-jaquad-qag/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
DeepESP/gpt2-spanish-medium
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
340
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="quartz14/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
DeepESP/gpt2-spanish
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,463
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-frozenlake results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.77 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="quartz14/taxi-frozenlake", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
DeepPavlov/bert-base-bg-cs-pl-ru-cased
[ "pytorch", "jax", "bert", "feature-extraction", "bg", "cs", "pl", "ru", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,614
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 799.00 +/- 199.62 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Iamvincent -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Iamvincent -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Iamvincent ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 200000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 2000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
DeepPavlov/bert-base-multilingual-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "multilingual", "arxiv:1704.05426", "arxiv:1809.05053", "arxiv:1908.10084", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
140
null
--- language: - zh library_name: transformers pipeline_tag: text2text-generation --- ```python from transformers import pipeline text_generator = pipeline("text-generation", model="svjack/T5-daliy-dialogue") text_generator( "你饿吗?" ) ''' [ { "generated_text": "是的,我快饿死了" } ] ''' ```
DeepPavlov/distilrubert-tiny-cased-conversational-v1
[ "pytorch", "distilbert", "ru", "arxiv:2205.02340", "transformers" ]
null
{ "architectures": null, "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9,141
null
--- language: - zh library_name: transformers pipeline_tag: text2text-generation --- ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("svjack/T5-dialogue-choose") model = AutoModelForSeq2SeqLM.from_pretrained("svjack/T5-dialogue-choose") text = ''' 根据如下上下文,选择最优的后续句子 上下文:如何成为程序员? 选项1:多练习做菜。 选项2:了解多门编程语言。 选项3:看几本历史书。 答案: ''' tokenizer.decode( model.generate( tokenizer.encode( text, return_tensors="pt", add_special_tokens=True ))[0], skip_special_tokens = True ) ''' 了解多门编程语言。 ''' ```
DeepPavlov/marianmt-tatoeba-ruen
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- tags: - autotrain - text-classification language: - zh widget: - text: "I love AutoTrain 🤗" datasets: - paulkm/autotrain-data-lottery_prod co2_eq_emissions: emissions: 11.554897545219454 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 2626879382 - CO2 Emissions (in grams): 11.5549 ## Validation Metrics - Loss: 0.146 - Accuracy: 0.960 - Precision: 0.967 - Recall: 0.944 - AUC: 0.986 - F1: 0.955 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/paulkm/autotrain-lottery_prod-2626879382 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("paulkm/autotrain-lottery_prod-2626879382", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("paulkm/autotrain-lottery_prod-2626879382", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
DeepPavlov/roberta-large-winogrande
[ "pytorch", "roberta", "text-classification", "en", "dataset:winogrande", "arxiv:1907.11692", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
348
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: carlosmirandad/rl-class-ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
DeepPavlov/rubert-base-cased-conversational
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17,362
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-Regression-Edmunds_Car_Reviews-all_car_brands results: [] language: - en --- # distilbert-base-uncased-Regression-Edmunds_Car_Reviews-all_car_brands This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2232 - Mse: 0.2232 - Rmse: 0.4724 - Mae: 0.3150 ## Model description This project works to predict the rating of a car based on the review for all automanufacturers. For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/NLP%20Regression/Edmunds%20Car%20Reviews%20(All%20Brands)/Edmunds_Consumer_car-Regression-All%20Manufacturers.ipynb ## Intended uses & limitations This model is intended to demonstrate my ability to solve a complex problem using technology. ## Training and evaluation data Dataset Source: https://www.kaggle.com/datasets/ankkur13/edmundsconsumer-car-ratings-and-reviews ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.3936 | 1.0 | 2592 | 0.2282 | 0.2282 | 0.4777 | 0.3158 | | 0.2163 | 2.0 | 5184 | 0.2160 | 0.2160 | 0.4647 | 0.3106 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
DeepPavlov/rubert-base-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1508.05326", "arxiv:1809.05053", "arxiv:1908.10084", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46,991
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: fr datasets: - lmqg/qg_frquad pipeline_tag: text2text-generation tags: - answer extraction widget: - text: "Pourtant, la strophe spensérienne, utilisée cinq fois avant que ne commence le chœur, constitue en soi un vecteur dont les répétitions structurelles, selon Ricks, relèvent du pur lyrisme tout en constituant une menace potentielle. Après les huit sages pentamètres iambiques, l'alexandrin final <hl> permet une pause <hl>, « véritable illusion d'optique » qu'accentuent les nombreuses expressions archaïsantes telles que did swoon, did seem, did go, did receive, did make, qui doublent le prétérit en un temps composé et paraissent à la fois « très précautionneuses et très peu pressées »." example_title: "Answering Extraction Example 1" - text: "Néanmoins, une fois encore, l'arithmétique modulaire est insuffisante pour venir à bout du théorème. Dirichlet utilise de nombreuses techniques analytiques, comme les séries entières et l'analyse complexe. Le fruit de ces travaux donne naissance à une nouvelle branche des mathématiques : la théorie analytique des nombres. L'un des points cruciaux de cette théorie provient de l'unique article de <hl> Bernhard Riemann <hl> en théorie des nombres : Sur le nombre de nombres premiers inférieurs à une taille donnée. Il conjecture une localisation des racines de sa fonction ζ. La recherche de la position des racines, initiée par Dirichlet, devient une préoccupation centrale et reste l'une des conjectures pressenties comme les plus difficiles des mathématiques de notre époque." example_title: "Answering Extraction Example 2" model-index: - name: lmqg/mt5-small-frquad-ae results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_frquad type: default args: default metrics: - name: BLEU4 (Answer Extraction) type: bleu4_answer_extraction value: 22.44 - name: ROUGE-L (Answer Extraction) type: rouge_l_answer_extraction value: 39.55 - name: METEOR (Answer Extraction) type: meteor_answer_extraction value: 32.89 - name: BERTScore (Answer Extraction) type: bertscore_answer_extraction value: 85.04 - name: MoverScore (Answer Extraction) type: moverscore_answer_extraction value: 72.46 - name: AnswerF1Score (Answer Extraction) type: answer_f1_score__answer_extraction value: 59.77 - name: AnswerExactMatch (Answer Extraction) type: answer_exact_match_answer_extraction value: 39.05 --- # Model Card of `lmqg/mt5-small-frquad-ae` This model is fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for answer extraction on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** fr - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="fr", model="lmqg/mt5-small-frquad-ae") # model prediction answers = model.generate_a("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-frquad-ae") output = pipe("Pourtant, la strophe spensérienne, utilisée cinq fois avant que ne commence le chœur, constitue en soi un vecteur dont les répétitions structurelles, selon Ricks, relèvent du pur lyrisme tout en constituant une menace potentielle. Après les huit sages pentamètres iambiques, l'alexandrin final <hl> permet une pause <hl>, « véritable illusion d'optique » qu'accentuent les nombreuses expressions archaïsantes telles que did swoon, did seem, did go, did receive, did make, qui doublent le prétérit en un temps composé et paraissent à la fois « très précautionneuses et très peu pressées ».") ``` ## Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-frquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 39.05 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | AnswerF1Score | 59.77 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | BERTScore | 85.04 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 34.18 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 29.45 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 25.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 22.44 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 32.89 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 72.46 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 39.55 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_frquad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: None - model: google/mt5-small - max_length: 512 - max_length_output: 32 - epoch: 23 - batch: 32 - lr: 0.0005 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be 
found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-frquad-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
DeepPavlov/rubert-base-cased
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1905.07213", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
148,127
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 276.71 +/- 19.60 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
DeepPavlov/xlm-roberta-large-en-ru-mnli
[ "pytorch", "xlm-roberta", "text-classification", "en", "ru", "dataset:glue", "dataset:mnli", "transformers", "xlm-roberta-large", "xlm-roberta-large-en-ru", "xlm-roberta-large-en-ru-mnli", "has_space" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
227
null
--- license: creativeml-openrail-m --- A free iOS and macOS app for generating AI art on the go: https://apps.apple.com/us/app/%E7%94%BB%E5%83%8F%E7%94%9F%E6%88%90ai-%E3%83%AD%E3%83%BC%E3%82%AB%E3%83%AB-stable-diffusion/id1658459595 This is made possible by the latest Apple Core ML tools together with the pipeline and model files from Hugging Face. This repository hosts the zipped resource files used by the application, so that the user can run the model on iPhone, iPad, or Mac.
DeepPavlov/xlm-roberta-large-en-ru
[ "pytorch", "xlm-roberta", "feature-extraction", "en", "ru", "transformers" ]
feature-extraction
{ "architectures": [ "XLMRobertaModel" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
190
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on SetFit/emotion. It achieves the following results on the evaluation set: - Loss: 0.2276 - Accuracy: 0.921 - F1: 0.9209 ## Model description This model follows chapter 2 of https://github.com/nlp-with-transformers/notebooks. A few things that were changed from the original notebook: - the emotion dataset has moved to SetFit/emotion https://github.com/nlp-with-transformers/notebooks/issues/77 - the new dataset doesn't have ClassLabel feature so needed to change int2str method https://github.com/nlp-with-transformers/notebooks/issues/77 - made the label names on inference API human-readable with https://discuss.huggingface.co/t/change-label-names-on-inference-api/3063/3 - function to inspect dataset for existence of certain strings ## Intended uses & limitations ## Training and evaluation data ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8732 | 1.0 | 250 | 0.3279 | 0.9055 | 0.9037 | | 0.259 | 2.0 | 500 | 0.2276 | 0.921 | 0.9209 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.13.0+cu116 - Datasets 1.16.1 - Tokenizers 0.10.3
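The card mentions making the inference-API label names human-readable but does not show how. A minimal sketch of one way to do this, by setting `id2label`/`label2id` on the config before loading the model, is given below. The model id is a placeholder, and the label order is an assumption based on the six classes of the emotion dataset; check the training notebook if in doubt.

```python
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, pipeline

# The six classes of the SetFit/emotion dataset; the exact order used in training is assumed here.
labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]

model_id = "distilbert-base-uncased-finetuned-emotion"  # placeholder repo path
config = AutoConfig.from_pretrained(model_id)
config.id2label = {i: label for i, label in enumerate(labels)}
config.label2id = {label: i for i, label in enumerate(labels)}

model = AutoModelForSequenceClassification.from_pretrained(model_id, config=config)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("I am over the moon about the results!"))
```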
DeltaHub/lora_t5-base_mrpc
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('MrDivakaruni/sd-class-butterflies-32') image = pipeline().images[0] image ```
DemangeJeremy/4-sentiments-with-flaubert
[ "pytorch", "flaubert", "text-classification", "fr", "transformers", "sentiments", "french", "flaubert-large" ]
text-classification
{ "architectures": [ "FlaubertForSequenceClassification" ], "model_type": "flaubert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
226
null
--- license: openrail --- # Oud (عود) Unconditional Diffusion The Oud is one of the most foundational instruments in Arab music. It can be heard in nearly every song, whether the subgenre is rooted in pop or classical music. Its distinguishing sound can be picked out of a crowd of string instruments with little to no training. Our unconditional diffusion model aims to pay due respect to that sound and the culture it has created. This project could not have been done without [the following audio diffusion tools](https://github.com/teticio/audio-diffusion). ## Usage Usage of this model is no different from any other audio diffusion model on Hugging Face. ```python import torch from diffusers import DiffusionPipeline # Setup device and create generator device = "cuda" if torch.cuda.is_available() else "cpu" generator = torch.Generator(device=device) # Instantiate model model_id = "mijwiz-laboratories/oud_diffusion_unconditional_256" audio_diffusion = DiffusionPipeline.from_pretrained(model_id).to(device) # Set seed for generator seed = generator.seed() generator.manual_seed(seed) # Run inference output = audio_diffusion(generator=generator) image = output.images[0] # Mel spectrogram generated audio = output.audios[0, 0] # Playable audio file ``` ## Limitations of Model The dataset used was very small, so the diversity of snippets that can be generated is rather limited. Furthermore, for high-intensity segments (think of a musician playing the instrument with high intensity), the realism and naturalness of the generated oud samples degrade.
Deniskin/essays_small_2000
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: other --- ## / 7th Layer / <img src="https://i.imgur.com/MjnczlB.png" width="1700" height=""> # (Important Notice:1.6) default CFG Scale : 7 ±5 default Sampler : DPM++ 2M Karras default Steps : 25 Negative prompt : (worst quality:1.4), (low quality:1.4) , (monochrome:1.1), # Don't write a lot of "Negative prompt". <img src="https://i.imgur.com/tE3PUBi.png" width="480" height=""> ## Test Model https://huggingface.co/syaimu/7th_test <img src="https://i.imgur.com/0xKIUvL.jpg" width="1700" height=""> <img src="https://i.imgur.com/lFZAYVv.jpg" width="1700" height=""> <img src="https://i.imgur.com/4IYqlYq.jpg" width="1700" height=""> <img src="https://i.imgur.com/v2pn57R.jpg" width="1700" height=""> # 7th_anime_v2.5_B → 7th_anime_v2_G <img src="https://i.imgur.com/K3o28Ci.jpg" width="1700" height=""> <img src="https://i.imgur.com/Bzywbkp.jpg" width="1700" height=""> # other <img src="https://i.imgur.com/oCZyzdA.jpg" width="1700" height=""> <img src="https://i.imgur.com/sAw842D.jpg" width="1700" height=""> <img src="https://i.imgur.com/lzuYVh0.jpg" width="1700" height=""> <img src="https://i.imgur.com/dOXsoeg.jpg" width="1700" height="">
DeskDown/MarianMixFT_en-ms
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
# Gender-and-Age-Detection <img alt="GitHub" src="https://img.shields.io/github/license/smahesh29/Gender-and-Age-Detection"> <h2>Objective :</h2> <p>To build a gender and age detector that can approximately guess the gender and age of the person (face) in a picture or through webcam.</p> <h2>About the Project :</h2> <p>In this Python project, I used deep learning to accurately identify the gender and age of a person from a single image of a face. I used the models trained by <a href="https://talhassner.github.io/home/projects/Adience/Adience-data.html">Tal Hassner and Gil Levi</a>. The predicted gender may be either ‘Male’ or ‘Female’, and the predicted age may be one of the following ranges: (0 – 2), (4 – 6), (8 – 12), (15 – 20), (25 – 32), (38 – 43), (48 – 53), (60 – 100) (8 nodes in the final softmax layer). It is very difficult to accurately guess an exact age from a single image because of factors like makeup, lighting, obstructions, and facial expressions, so I framed this as a classification problem rather than a regression problem.</p> <h2>Dataset :</h2> <p>For this Python project, I used the Adience dataset; the dataset is available in the public domain and you can find it <a href="https://www.kaggle.com/ttungl/adience-benchmark-gender-and-age-classification">here</a>. This dataset serves as a benchmark for face photos and covers various real-world imaging conditions like noise, lighting, pose, and appearance. The images have been collected from Flickr albums and distributed under the Creative Commons (CC) license. It has a total of 26,580 photos of 2,284 subjects in eight age ranges (as mentioned above) and is about 1GB in size. The models I used were trained on this dataset.</p> <h2>Additional Python Libraries Required :</h2> <ul> <li>OpenCV</li> pip install opencv-python </ul> <ul> <li>argparse</li> pip install argparse </ul> <h2>The contents of this Project :</h2> <ul> <li>opencv_face_detector.pbtxt</li> <li>opencv_face_detector_uint8.pb</li> <li>age_deploy.prototxt</li> <li>age_net.caffemodel</li> <li>gender_deploy.prototxt</li> <li>gender_net.caffemodel</li> <li>a few pictures to try the project on</li> <li>detect.py</li> </ul> <p>For face detection, we have a .pb file: this is a protobuf file (protocol buffer); it holds the graph definition and the trained weights of the model. We can use this to run the trained model. And while a .pb file holds the protobuf in binary format, one with the .pbtxt extension holds it in text format. These are TensorFlow files.
For age and gender, the .prototxt files describe the network configuration and the .caffemodel file defines the internal states of the parameters of the layers.</p> <h2>Usage :</h2> <ul> <li>Download my repository</li> <li>Open your Command Prompt or Terminal and change directory to the folder where all the files are present.</li> <li><b>Detecting Gender and Age of face in Image</b> Use Command :</li> python detect.py --image <image_name> </ul> <p><b>Note: </b>The image should be present in the same folder where all the files are present</p> <ul> <li><b>Detecting Gender and Age of face through webcam</b> Use Command :</li> python detect.py </ul> <ul> <li>Press <b>Ctrl + C</b> to stop the program execution.</li> </ul> # Working: [![Watch the video](https://img.youtube.com/vi/ReeccRD21EU/0.jpg)](https://youtu.be/ReeccRD21EU) <h2>Examples :</h2> <p><b>NOTE: I downloaded the images from Google; if you have any query or problem, I can remove them. I used them for educational purposes only.</b></p> >python detect.py --image girl1.jpg Gender: Female Age: 25-32 years <img src="Example/Detecting age and gender girl1.png"> >python detect.py --image girl2.jpg Gender: Female Age: 8-12 years <img src="Example/Detecting age and gender girl2.png"> >python detect.py --image kid1.jpg Gender: Male Age: 4-6 years <img src="Example/Detecting age and gender kid1.png"> >python detect.py --image kid2.jpg Gender: Female Age: 4-6 years <img src="Example/Detecting age and gender kid2.png"> >python detect.py --image man1.jpg Gender: Male Age: 38-43 years <img src="Example/Detecting age and gender man1.png"> >python detect.py --image man2.jpg Gender: Male Age: 25-32 years <img src="Example/Detecting age and gender man2.png"> >python detect.py --image woman1.jpg Gender: Female Age: 38-43 years <img src="Example/Detecting age and gender woman1.png"> # Support : If you found this project helpful or you learned something from the source code and want to thank me, consider helping me pay my internet bills. This would encourage me to create many such projects 👨🏻‍💻 <ul> <li><a href="https://www.paypal.me/smahesh29"><b>PayPal</b></a></li> <li><a href="https://imjo.in/XNZDCJ"><b>₹ (INR)</b></a></li> <li><b>UPI ID :</b> maheshusa29@oksbi</li> </ul>
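The README describes how to call detect.py but does not reproduce the script itself. As a rough illustration only, the sketch below shows how the listed model files are typically loaded with OpenCV's dnn module; the blob size, mean values, and the skipped face-detection/cropping step are assumptions for the sketch, not necessarily what detect.py actually does.

```python
import cv2

# Load the face detector (TensorFlow) and the age/gender classifiers (Caffe).
face_net = cv2.dnn.readNet("opencv_face_detector_uint8.pb", "opencv_face_detector.pbtxt")
age_net = cv2.dnn.readNet("age_net.caffemodel", "age_deploy.prototxt")
gender_net = cv2.dnn.readNet("gender_net.caffemodel", "gender_deploy.prototxt")

AGE_BUCKETS = ["(0-2)", "(4-6)", "(8-12)", "(15-20)", "(25-32)", "(38-43)", "(48-53)", "(60-100)"]
GENDERS = ["Male", "Female"]

# In the real script the face region would first be found with face_net and cropped;
# here we feed the whole image for brevity.
image = cv2.imread("girl1.jpg")
blob = cv2.dnn.blobFromImage(image, 1.0, (227, 227), (78.43, 87.77, 114.90), swapRB=False)

gender_net.setInput(blob)
gender = GENDERS[gender_net.forward()[0].argmax()]

age_net.setInput(blob)
age = AGE_BUCKETS[age_net.forward()[0].argmax()]

print(f"Gender: {gender}, Age: {age} years")
```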
DeskDown/MarianMix_en-ja-10
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 296.18 +/- 13.50 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
DeskDown/MarianMix_en-zh-10
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0346 - Rouge1: 16.8527 - Rouge2: 8.331 - Rougel: 16.4475 - Rougelsum: 16.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 6.7536 | 1.0 | 1209 | 3.2881 | 13.6319 | 5.4635 | 13.0552 | 13.1093 | | 3.9312 | 2.0 | 2418 | 3.1490 | 16.8402 | 8.3559 | 16.1876 | 16.2869 | | 3.5987 | 3.0 | 3627 | 3.1043 | 17.9887 | 9.3136 | 17.3034 | 17.4313 | | 3.4261 | 4.0 | 4836 | 3.0573 | 17.0089 | 8.7389 | 16.5351 | 16.5023 | | 3.3221 | 5.0 | 6045 | 3.0569 | 16.8461 | 8.0988 | 16.4898 | 16.4927 | | 3.2549 | 6.0 | 7254 | 3.0511 | 17.3428 | 8.2234 | 16.7312 | 16.8749 | | 3.2067 | 7.0 | 8463 | 3.0334 | 16.268 | 7.9729 | 15.9342 | 16.0065 | | 3.1842 | 8.0 | 9672 | 3.0346 | 16.8527 | 8.331 | 16.4475 | 16.6421 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
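This card lists the training setup but no inference example. A minimal sketch using the `transformers` summarization pipeline is shown below; the model path is assembled from the card's title and is an assumption about the final repository location on the Hub.

```python
from transformers import pipeline

# Placeholder repo path based on the card title; prepend the owning username as needed.
summarizer = pipeline("summarization", model="mt5-small-finetuned-amazon-en-es")

review = (
    "Nothing special at all about this product. The book is too small and stiff "
    "and hard to write in, and the huge sticker on the back does not come off."
)
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```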
Dhito/am
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model Faisalrahi/pytorch_model.bin is restricted and you are not in the authorized list. Visit https://huggingface.co/Faisalrahi/pytorch_model.bin to ask for access.
Dhritam/Zova-bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 266.21 +/- 12.42 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Dhruva/Interstellar
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 739.50 +/- 213.92 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga newbie4000 -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga newbie4000 -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga newbie4000 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 2000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Dongmin/testmodel
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
11
null
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-ner-invoiceSenderRecipient-all-inv-26-12 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-ner-invoiceSenderRecipient-all-inv-26-12 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0245 - Precision: 0.8602 - Recall: 0.9015 - F1: 0.8804 - Accuracy: 0.9921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0157 | 0.06 | 500 | 0.0286 | 0.8569 | 0.8605 | 0.8587 | 0.9908 | | 0.0156 | 0.11 | 1000 | 0.0287 | 0.8340 | 0.8940 | 0.8630 | 0.9910 | | 0.0141 | 0.17 | 1500 | 0.0296 | 0.8368 | 0.8853 | 0.8604 | 0.9908 | | 0.014 | 0.23 | 2000 | 0.0296 | 0.8356 | 0.8915 | 0.8627 | 0.9910 | | 0.0144 | 0.28 | 2500 | 0.0306 | 0.8310 | 0.8896 | 0.8593 | 0.9906 | | 0.0136 | 0.34 | 3000 | 0.0290 | 0.8384 | 0.8842 | 0.8607 | 0.9910 | | 0.0134 | 0.4 | 3500 | 0.0306 | 0.8514 | 0.8779 | 0.8645 | 0.9912 | | 0.0138 | 0.46 | 4000 | 0.0307 | 0.8475 | 0.8790 | 0.8630 | 0.9910 | | 0.0139 | 0.51 | 4500 | 0.0301 | 0.8208 | 0.9002 | 0.8587 | 0.9908 | | 0.014 | 0.57 | 5000 | 0.0320 | 0.8307 | 0.8981 | 0.8631 | 0.9909 | | 0.0155 | 0.63 | 5500 | 0.0307 | 0.8329 | 0.8992 | 0.8648 | 0.9909 | | 0.0154 | 0.68 | 6000 | 0.0268 | 0.8403 | 0.8971 | 0.8677 | 0.9913 | | 0.0178 | 0.74 | 6500 | 0.0269 | 0.8548 | 0.8869 | 0.8705 | 0.9916 | | 0.0177 | 0.8 | 7000 | 0.0268 | 0.8552 | 0.8904 | 0.8725 | 0.9917 | | 0.0178 | 0.85 | 7500 | 0.0267 | 0.8498 | 0.8972 | 0.8729 | 0.9917 | | 0.017 | 0.91 | 8000 | 0.0259 | 0.8517 | 0.8969 | 0.8737 | 0.9917 | | 0.0176 | 0.97 | 8500 | 0.0249 | 0.8523 | 0.8921 | 0.8717 | 0.9916 | | 0.0157 | 1.02 | 9000 | 0.0274 | 0.8535 | 0.8990 | 0.8757 | 0.9918 | | 0.0134 | 1.08 | 9500 | 0.0293 | 0.8375 | 0.9060 | 0.8704 | 0.9913 | | 0.0132 | 1.14 | 10000 | 0.0278 | 0.8648 | 0.8864 | 0.8755 | 0.9919 | | 0.0135 | 1.19 | 10500 | 0.0273 | 0.8540 | 0.8958 | 0.8744 | 0.9917 | | 0.0133 | 1.25 | 11000 | 0.0277 | 0.8442 | 0.9034 | 0.8728 | 0.9917 | | 0.0138 | 1.31 | 11500 | 0.0276 | 0.8484 | 0.9035 | 0.8751 | 0.9917 | | 0.0141 | 1.37 | 12000 | 0.0274 | 0.8501 | 0.9016 | 0.8751 | 0.9918 | | 0.0137 | 1.42 | 12500 | 0.0274 | 0.8529 | 0.9010 | 0.8763 | 0.9918 | | 0.014 | 1.48 | 13000 | 0.0269 | 0.8509 | 0.9022 | 0.8758 | 0.9919 | | 0.0141 | 1.54 | 13500 | 0.0260 | 0.8653 | 0.8926 | 0.8787 | 0.9920 | | 0.0149 | 1.59 | 14000 | 0.0258 | 0.8521 | 0.9048 | 0.8777 | 0.9919 | | 0.0149 | 1.65 | 14500 | 0.0257 | 0.8607 | 0.8980 | 0.8790 | 0.9921 | | 0.0152 | 1.71 | 15000 | 0.0257 | 0.8596 | 0.9001 | 0.8794 | 0.9920 | | 0.015 | 1.76 | 15500 | 0.0257 | 0.8556 | 0.9032 | 0.8788 | 0.9920 | | 0.0157 | 1.82 | 16000 | 0.0248 | 0.8620 | 0.8993 | 0.8802 | 0.9922 | | 0.0158 | 1.88 | 16500 | 
0.0251 | 0.8573 | 0.9036 | 0.8798 | 0.9921 | | 0.0163 | 1.93 | 17000 | 0.0248 | 0.8579 | 0.9034 | 0.8800 | 0.9921 | | 0.017 | 1.99 | 17500 | 0.0245 | 0.8602 | 0.9015 | 0.8804 | 0.9921 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0 - Datasets 2.3.2 - Tokenizers 0.10.3
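The card reports token-level precision/recall but no usage snippet. A minimal sketch with the token-classification pipeline is given below; the model path is taken from the card's title and is an assumption, and the example invoice text is invented for illustration.

```python
from transformers import pipeline

# Placeholder repo path based on the card title; adjust to the actual location on the Hub.
ner = pipeline(
    "token-classification",
    model="distilbert-base-uncased-ner-invoiceSenderRecipient-all-inv-26-12",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = "Invoice 4711 from Acme GmbH, Hauptstrasse 1, Berlin, issued to Contoso Ltd., London."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```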
Doogie/Waynehills-KE-T5-doogie
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 160.35 +/- 100.26 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Waynehillsdev/Waynehills-STT-doogie-server
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
61
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="PakanunNoa/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Doohae/q_encoder
[ "pytorch" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.94 +/- 22.10 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Doxophobia/DialoGPT-medium-celeste
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 --- ## Brief Introduction An ERNIE-1.0 model with 110 million parameters. ## Released Model Information The PyTorch ERNIE-base model released here was downloaded from Hugging Face, with the mismatch between its vocab and config fixed. ## Usage ```python from transformers import AutoModelForMaskedLM, AutoTokenizer tokenizer=AutoTokenizer.from_pretrained("xiaoqin/ernie-1.0-base-zh") model=AutoModelForMaskedLM.from_pretrained("xiaoqin/ernie-1.0-base-zh") ```
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - timit_asr model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the timit_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.4243 - Wer: 0.2830 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5112 | 3.45 | 500 | 1.1699 | 0.8236 | | 0.5349 | 6.9 | 1000 | 0.3911 | 0.3609 | | 0.1875 | 10.34 | 1500 | 0.3993 | 0.3170 | | 0.1113 | 13.79 | 2000 | 0.3870 | 0.3046 | | 0.0778 | 17.24 | 2500 | 0.4056 | 0.2963 | | 0.0561 | 20.69 | 3000 | 0.3781 | 0.2918 | | 0.0461 | 24.14 | 3500 | 0.4186 | 0.2857 | | 0.0375 | 27.59 | 4000 | 0.4243 | 0.2830 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
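The card covers training only; a minimal inference sketch with the speech-recognition pipeline is shown below. The model path is taken from the card's title and is an assumption about the final repository location, and the audio file name is a placeholder.

```python
from transformers import pipeline

# Placeholder repo path based on the card title; adjust to the actual model location on the Hub.
asr = pipeline("automatic-speech-recognition", model="wav2vec2-base-timit-demo-colab")

# Any 16 kHz mono audio file will do; TIMIT-style read speech should work best.
print(asr("sample.wav")["text"])
```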
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
albert-xlarge-v2
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,973
2022-12-27T10:14:22Z
--- library_name: stable-baselines3 tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **PPO** Agent playing **CartPole-v1** This is a trained model of a **PPO** agent playing **CartPole-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
bert-base-cased-finetuned-mrpc
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11,644
2022-12-27T10:20:45Z
--- license: mit --- ## Sentiment Analysis API This is the deployment part of the project. ### Training: Run the Google Colab notebook (Runtime = "GPU") (https://colab.research.google.com/drive/1EuF5FDl1X8VnuOO5RxzmM0c9TbtQrVm9?usp=sharing) ### Fine Tuning 1) Increasing the number of epochs 2) Increasing BATCH_SIZE to 32 3) Changing the Adam optimizer learning rate ### Usage 1) Clone the repository 2) Set up the conda environment with requirements.txt 3) In a terminal, run the command: uvicorn sentiment_analyzer.api:app --reload
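The card names the uvicorn entry point but not the module contents. The sketch below is a hypothetical, minimal version of what `sentiment_analyzer/api.py` could look like; the module layout, model choice, and response schema are assumptions for illustration and are not taken from the project.

```python
# sentiment_analyzer/api.py -- hypothetical minimal implementation
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="Sentiment Analysis API")

# Placeholder model; the project presumably loads its own fine-tuned checkpoint instead.
classifier = pipeline("sentiment-analysis")


class Request(BaseModel):
    text: str


@app.post("/predict")
def predict(request: Request):
    result = classifier(request.text)[0]
    return {"label": result["label"], "score": result["score"]}
```

With this layout, `uvicorn sentiment_analyzer.api:app --reload` starts the server and a JSON body such as `{"text": "I love this!"}` can be POSTed to `/predict`.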
bert-base-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,621,271
2022-12-27T10:22:47Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: mitro99/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
bert-base-chinese
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "zh", "arxiv:1810.04805", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,377,486
2022-12-27T10:29:09Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="m00ra/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
bert-base-german-dbmdz-cased
[ "pytorch", "jax", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,814
null
Access to model S-Sajad-Hosseini/q-Taxi-v3 is restricted and you are not in the authorized list. Visit https://huggingface.co/S-Sajad-Hosseini/q-Taxi-v3 to ask for access.
bert-base-multilingual-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
328,585
2022-12-27T10:43:25Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="m00ra/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
bert-large-cased-whole-word-masking-finetuned-squad
[ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,214
2022-12-27T10:46:48Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 0.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="S-Sajad-Hosseini/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
1n3skh/idk
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 714.50 +/- 219.04 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lotek93 -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga lotek93 -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga lotek93 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 20000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Ab2021/bookst5
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-27T20:56:20Z
--- tags: - conversational --- # Black Doom DialoGPT Model
AdapterHub/bert-base-uncased-pf-comqa
[ "bert", "en", "dataset:com_qa", "arxiv:2104.08247", "adapter-transformers", "question-answering" ]
question-answering
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - unk tags: - autotrain - summarization datasets: - ell-hol/autotrain-data-test-orangesum widget: - text: I love AutoTrain 🤗 co2_eq_emissions: emissions: 675.7789931017469 model-index: - name: ell-hol/mT5-OrangeSum results: - task: type: summarization name: Summarization dataset: name: orange_sum type: orange_sum config: abstract split: validation metrics: - type: rouge value: 33.377 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhjMWIxYmNmNDYzNTMzMDM2YjQyOTdkYjYyMDJkZDhlNzQ2ZDVkNGM2YTIzODU4ZWYwZDg2ODZkN2U5OTk2MSIsInZlcnNpb24iOjF9.UL_nv_GGJ75LMgDmRjvrp0dYhCyjz-h5txS1ljDFS7k9Yy6iJ0QnTebou1tsLFtj7sBSvUKvZeyqFXEHN7SBCg - type: rouge value: 14.4472 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTYxZTVkMzFlMGUxMWNmNzc5ZDI0OWM3ODY2ZTc1MDg2MDc2NTRiZjM3OTA4NGI1MmEwNzQzMjQyOWM5NDE3YiIsInZlcnNpb24iOjF9.xsBp4kyHAnAnAWllwvcXNF3vFFbgP_3Ipplg0Cs8yMzY2qIKozlflWSpmm7qyru1RvtDrHH5JQy0hSSz49tMDQ - type: rouge value: 24.1902 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgxMDNmODZiOTcxYmU0NjlkMjEzOTBmZjZhMzkxZDcyODNjYmJjOGNiNzA2MTI2YjU4MTUzZTFlM2EwYjRkNyIsInZlcnNpb24iOjF9.QE9X1gqHxDA_Vzj86nOi1FrYXrvvYR-uQgAKn2ESJp48mnT4rHCnpxVo3qJGXcoeD0vA0M9VDWJzc2pci34PBA - type: rouge value: 25.5277 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDk2YzY1NjU3NDgxMDllYjIwMGI5NGE2ZjY3NzcxZGEwNmYzYjQxYzVlZTdmYzdkYWIxM2Y1YjkxNjZhOWRlZiIsInZlcnNpb24iOjF9.ksd-KgRtY71cHJxFsqLWr5lofRSrfiwixGTI6Hek6GvfisssetoDPy17bWnQpUqfN0ozxJciw2VzpauYPDuZCg - type: loss value: 1.6347737312316895 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDNmODJhNzdmMzNkMTc4MDcwZDhmNDFiZjM1ZWVmYjQ4N2IzNWU3MjYwMWM4ZmM0NjFhNjY1OTBlZjBkMjY0YSIsInZlcnNpb24iOjF9.aaF2D-cKnhK4YaqFV23QhoiTCOK7rQJKoXJMMj-kuxe_NLQBLNj73LBou376IlsTmOxxk_mmEimzwMMbTiVSDA - type: gen_len value: 48.4967 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzk3YjMxZWY2NzE5ZWMxZjBhYmE5YzU2YTM3MzNmMjlmNmJjM2MyMzY4ZTE1MjI1ZTNkN2YxOWZhOThmYzljMyIsInZlcnNpb24iOjF9._I_I9B66dT3S8RMMmMACG3YjIQYcXzmodriDWM33jRa4X6NFQx0b6_YHNP7K-uLEm8qD31bgb0NlsaRA37qLBA --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 2638979565 - CO2 Emissions (in grams): 675.7790 ## Validation Metrics - Loss: 1.631 - Rouge1: 33.348 - Rouge2: 14.481 - RougeL: 24.210 - RougeLsum: 25.514 - Gen Len: 48.497 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ell-hol/autotrain-test-orangesum-2638979565 ```
AdapterHub/roberta-base-pf-conll2000
[ "roberta", "en", "dataset:conll2000", "arxiv:2104.08247", "adapter-transformers", "token-classification", "adapterhub:chunk/conll2000" ]
token-classification
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - "zh" tags: - "chinese" - "token-classification" - "pos" - "dependency-parsing" datasets: - "universal_dependencies" license: "cc-by-sa-4.0" pipeline_tag: "token-classification" --- # deberta-base-chinese-ud-goeswith ## Model Description This is a DeBERTa(V2) model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [deberta-base-chinese-upos](https://huggingface.co/KoichiYasuoka/deberta-base-chinese-upos). ## How to Use ```py class UDgoeswith(object): def __init__(self,bert): from transformers import AutoTokenizer,AutoModelForTokenClassification self.tokenizer=AutoTokenizer.from_pretrained(bert) self.model=AutoModelForTokenClassification.from_pretrained(bert) def __call__(self,text): import numpy,torch,ufal.chu_liu_edmonds w=self.tokenizer(text,return_offsets_mapping=True) v=w["input_ids"] x=[v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[j] for i,j in enumerate(v[1:-1],1)] with torch.no_grad(): e=self.model(input_ids=torch.tensor(x)).logits.numpy()[:,1:-2,:] r=[1 if i==0 else -1 if j.endswith("|root") else 0 for i,j in sorted(self.model.config.id2label.items())] e+=numpy.where(numpy.add.outer(numpy.identity(e.shape[0]),r)==0,0,numpy.nan) g=self.model.config.label2id["X|_|goeswith"] r=numpy.tri(e.shape[0]) for i in range(e.shape[0]): for j in range(i+2,e.shape[1]): r[i,j]=r[i,j-1] if numpy.nanargmax(e[i,j-1])==g else 1 e[:,:,g]+=numpy.where(r==0,0,numpy.nan) m=numpy.full((e.shape[0]+1,e.shape[1]+1),numpy.nan) m[1:,1:]=numpy.nanmax(e,axis=2).transpose() p=numpy.zeros(m.shape) p[1:,1:]=numpy.nanargmax(e,axis=2).transpose() for i in range(1,m.shape[0]): m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i] h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] if [0 for i in h if i==0]!=[0]: m[:,0]+=numpy.where(m[:,0]==numpy.nanmax(m[[i for i,j in enumerate(h) if j==0],0]),0,numpy.nan) m[[i for i,j in enumerate(h) if j==0]]+=[0 if i==0 or j==0 else numpy.nan for i,j in enumerate(h)] h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] u="# text = "+text+"\n" v=[(s,e) for s,e in w["offset_mapping"] if s<e] for i,(s,e) in enumerate(v,1): q=self.model.config.id2label[p[i,h[i]]].split("|") u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n" return u+"\n" nlp=UDgoeswith("KoichiYasuoka/deberta-base-chinese-ud-goeswith") print(nlp("我把这本书看完了")) ``` with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/). Or without ufal.chu-liu-edmonds: ``` from transformers import pipeline nlp=pipeline("universal-dependencies","KoichiYasuoka/deberta-base-chinese-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple") print(nlp("我把这本书看完了")) ```
Aimendo/Triage
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: nemanjar/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Ajteks/Chatbot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -55.72 +/- 62.76 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 5000000 'learning_rate': 0.0001 'num_envs': 16 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.999 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'alvarobb/ppo-LunarLander-v2' 'batch_size': 2048 'minibatch_size': 512} ```
Akaramhuggingface/News
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - stable-diffusion - text-to-image - image-to-image - diffusers license: creativeml-openrail-m inference: true --- **Big update: DucHaitenAIart v3.1** *DucHaitenAIart v3.1 can take more diverse and more detailed prompts, with gorgeous colors and more realistic shadows. The images keep the feel of 3D anime, but the materials look much more realistic. The weak points: some celebrity likenesses are no longer in the model, the look may be a bit too 3D-anime for some tastes, and teeth are a little short on detail.* **Please support me by becoming a patron:** https://www.patreon.com/duchaitenreal ***** All sample images use text-to-image only: no editing, no image-to-image, no face restoration, no highres fix, no extras. ***** Hello, and sorry for my poor English. After hundreds of attempts across dozens of versions, DucHaitenAIart has finally released its official version. It improves image sharpness, lighting realism, and the range of camera angles; the only downside is that it is less flexible and less random than beta-v6.0, so beta-v6.0 will stay available for anyone who wants to download it. The model can create NSFW images, but since it is not a hentai/porn model, anything really hardcore will be difficult to produce. To make it work better with NSFW images, add "hentai, porn, rule 34" to the prompt. Always add to the prompt: "masterpiece, best quality, 1girl or 1boy, realistic, anime or cartoon (two different styles; I personally prefer anime), 3D, pixar, highly detailed eyes, perfect eyes, both eyes are the same, smooth, perfect face, hd, 2k, 4k, 8k, 16k" (add "pin-up" if you are going to give your character a sexy pose; if you don't want eyes drawn, leave the eye keywords out). Add "extremely detailed 8K, high resolution, ultra quality" to further enhance image quality, but note that it may weaken the model's attention to other keywords. You can add "glare, Iridescent, Global illumination, real hair movement, realistic light, realistic shadow" to the prompt for a better lighting effect, but the image may then become more realistic than you want; adjust accordingly.
***** Sampler: DPM++ 2S a Karras + negative prompt: illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error ***** Some test: ![37DAF310-BEC8-46BF-B68B-2B62D7FD570B.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/jQ46etkANhUUtz_uYQyR-.png) ![9F81AED9-3E11-4A63-865C-5679F74C05EC.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/jAGcOQ0UowwqRchlM96dX.png) ![F3E66D20-6AF2-4CE8-992D-F9AB9AC5AD72.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/ODtYR7NENJOPQkyE6u_Si.png) ![3AF4B5A5-34CA-463E-B9E3-782B54994BD1.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/sUWm2K5SDepcr77uBwJWL.png) ![D66DDA7C-083C-48EF-8C52-84043B529FEA.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/5qE_HgiZPKzTEZIluLSYG.png) ![8F8A0BB3-5C67-437B-B934-A79646A6393F.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/D9F9oVbp-FNk3PUSVJb0X.png) ![3C8BD0A8-ED28-4761-A3AA-846DDFC84EBA.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/50JoUeFchiV04cJk6g_5d.png) ![9170F73B-6EF5-42E1-A63C-A715F71100D0.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/jBDzWnSJjyafeJFryjTYF.png) ![0136D9DF-9CF2-4AF8-B6FB-265653ADCF1E.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/YyxzXcbAHyRLiyDX8TNuS.png) ![4EC453D0-3ADF-45D8-B9F3-E2ECECCDE1D4.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/Lbef0zbfrvHD1pQA9Eltf.png) ![81218795-34F5-474D-AB95-4D9C888D72D8.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/uBTH9T77fKGINCLPpeXWH.png) ![B8759911-5820-4704-800A-0F8647380AFA.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/20qL2iMQA-mnCDLo084Fe.png)
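For reference, a minimal 🧨 diffusers text-to-image sketch built from the prompt advice above. This is a sketch only: the repo id is a placeholder for wherever this checkpoint is published in diffusers format, fp16 assumes a CUDA GPU, and the DPM++ 2S a Karras sampler recommended above is not configured here.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id; point it at the published DucHaitenAIart checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "<user>/DucHaitenAIart", torch_dtype=torch.float16
).to("cuda")

# Prompt and negative prompt are abridged from the recommendations in this card.
prompt = (
    "masterpiece, best quality, 1girl, realistic, anime, 3D, pixar, "
    "highly detailed eyes, perfect eyes, smooth, perfect face, hd, 4k, 8k"
)
negative_prompt = (
    "illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), "
    "lowres, bad anatomy, bad hands, watermark, signature"
)

image = pipe(
    prompt, negative_prompt=negative_prompt, num_inference_steps=30, guidance_scale=7.5
).images[0]
image.save("duchaitenaiart_sample.png")
```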
Akari/albert-base-v2-finetuned-squad
[ "pytorch", "tensorboard", "albert", "question-answering", "dataset:squad_v2", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2022-12-28T10:38:19Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: QRDQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 374.00 +/- 214.89 name: mean_reward verified: false --- # **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga PabloTa -f logs/ python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga PabloTa -f logs/ rl_zoo3 enjoy --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga PabloTa ``` ## Hyperparameters ```python OrderedDict([('batch_size', 128), ('buffer_size', 25000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_fraction', 0.225), ('frame_stack', 3), ('learning_rate', 0.023), ('n_timesteps', 1000000.0), ('normalize', False), ('optimize_memory_usage', False), ('policy', 'CnnPolicy')]) ```
Akash7897/bert-base-cased-wikitext2
[ "pytorch", "tensorboard", "bert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
# **roberta-base model fine-tuned on the Airline Passenger Complaint Dataset** - The model can be used to determine the sentiment of a statement.
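A minimal inference sketch with the 🤗 Transformers pipeline. The repo id below is a placeholder; substitute the actual location of this fine-tuned checkpoint.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual fine-tuned roberta-base checkpoint.
classifier = pipeline("text-classification", model="<user>/roberta-base-airline-complaints")

print(classifier("My luggage was lost and no one at the counter could help me."))
```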
Akash7897/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
2022-12-28T10:42:35Z
--- inference: true language: - en tags: - stable-diffusion - text-to-image - embedding license: wtfpl --- # rr_inferno embedding - SD 2.1 ## Example Prompt "Portrait of a rr_inferno skull skeleton, made of lava, fire, giant, concept art, splash art, global lightning, artwork of a phoenix, nine tails, fiery, gorgeous digital painting" ## Training Trained for 850 steps. 20 images, 6 vectors. Batch size of 3, 3 gradient accumulation steps, learning rate schedule of 0.0004:200, 0.0002:400, 0.0001:1000, 0.00005 ![img1.png](https://i.imgur.com/NBKORBP.png) ![img2.png](https://i.imgur.com/JFpRuTc.png) ![img3.png](https://i.imgur.com/FeaqZVV.png) ![img4.png](https://i.imgur.com/Tiu7Daa.png)
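In AUTOMATIC1111 the embedding is used by dropping its file into the `embeddings` folder and writing `rr_inferno` in the prompt. For diffusers users, a minimal sketch follows; it assumes the base model is SD 2.1 (as stated above), uses a placeholder path for the embedding file, and relies on `load_textual_inversion` from a recent diffusers release.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model is Stable Diffusion 2.1; the embedding path/repo below is a placeholder.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("<path-or-repo-of-this-embedding>", token="rr_inferno")

prompt = (
    "Portrait of a rr_inferno skull skeleton, made of lava, fire, giant, concept art, "
    "splash art, global lightning, artwork of a phoenix, nine tails, fiery, gorgeous digital painting"
)
pipe(prompt).images[0].save("rr_inferno.png")
```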
Akashpb13/Swahili_xlsr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "sw", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### rbto3v2 Dreambooth model trained by rudzinskimaciej with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
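If you would rather run the concept locally than through the Colab notebooks above, a minimal diffusers sketch follows. Both the repo id and the prompt token are assumptions inferred from the trainer and concept names; adjust them to the actual repository and instance prompt.

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id and prompt token are guesses based on the concept name above; adjust as needed.
pipe = StableDiffusionPipeline.from_pretrained(
    "rudzinskimaciej/rbto3v2", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of rbto3v2").images[0]
image.save("rbto3v2.png")
```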
Akashpb13/xlsr_kurmanji_kurdish
[ "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "kmr", "ku", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - autotrain - summarization language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - ell-hol/autotrain-data-mt5-dialogsum co2_eq_emissions: emissions: 248.06396898781733 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 2644579647 - CO2 Emissions (in grams): 248.0640 ## Validation Metrics - Loss: 1.316 - Rouge1: 40.914 - Rouge2: 16.140 - RougeL: 33.122 - RougeLsum: 35.661 - Gen Len: 34.075 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ell-hol/autotrain-mt5-dialogsum-2644579647 ```
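The same Inference API call can be made from Python; this mirrors the cURL request above, with the API key left as a placeholder.

```python
import requests

API_URL = "https://api-inference.huggingface.co/ell-hol/autotrain-mt5-dialogsum-2644579647"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}  # placeholder token

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```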
AkshaySg/GrammarCorrection
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 274.55 +/- 9.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the Hub repo id and filename below are placeholders to fill in for this model. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="<model>.zip") model = PPO.load(checkpoint) ```
AkshaySg/LanguageIdentification
[ "multilingual", "dataset:VoxLingua107", "LID", "spoken language recognition", "license:apache-2.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - wildcard widget: - text: a pkblz ball in the middle of a miniature jungle - text: a photo of a spectral ornate pkblz ball, trending on artstation, realistic - text: a colored pencil sketch of a pkblz ball --- # The Pokeball Machine The **Pokeball Machine** is a Dreambooth model for the `pokeball` concept (represented by the `pkblz` identifier). It applies to the *wildcard* theme. It is fine-tuned from `CompVis/stable-diffusion-v1-4` checkpoint on a small dataset of pokeball images (i.e., images of the red-white original pokeball). It can be used by modifying the `instance_prompt`: **a pkblz ball in the middle of a miniature jungle** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! #### Fine-Tuning Details - Number of training images: 31 - Learning rate: 2e-06 - Training steps: 800 - Guidance Scale: 10 - Inference Steps: 50-75 #### Output Examples <table> <tr> <td>a blueprint photo of a <b>pkblz</b> ball</td> <td>a photo of a cybernetic <b>pkblz</b> ball, wide shot</td> <td>a photo of a <b>pkblz</b> ball in the style vintage disney</td> </tr> <tr> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(1).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(2).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(3).png" style="height:200px"> </td> </tr> <tr> <td>a photo of a mosaic <b>pkblz</b> ball lying in an antique temple</td> <td>a photo of a detailed ornate <b>pkblz</b> ball</td> <td>a pkblz ball underwater</td> </tr> <tr> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(4).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(5).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(6).png" style="height:200px"> </td> </tr> <tr> <td>a <b>pkblz</b> ball in the middle of a miniature jungle</td> <td>a <b>pkblz</b> ball underwater</td> <td>a mystic <b>pkblz</b> ball, trending on artstation</td> </tr> <tr> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(7).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(8).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(9).png" style="height:200px"> </td> </tr> <tr> <td>a <b>pkblz</b> ball underwater, trending on artstation</td> <td>a wooden <b>pkblz</b> ball</td> <td>a <b>pkblz</b> ball hovering over a pond</td> </tr> <tr> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(10).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(11).png" style="height:200px"> </td> <td align="center"><img 
src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(12).png" style="height:200px"> </td> </tr> <tr> <td>a <b>pkblz</b> ball on a sunny tropical beach</td> <td>a steampunk <b>pkblz</b> ball, trending on artstation</td> <td>a colored pencil sketch of a <b>pkblz</b> ball</td> </tr> <tr> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(13).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(14).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(15).png" style="height:200px"> </td> </tr> <tr> <td>a photo of a spectral ornate <b>pkblz</b> ball, trending on artstation, realistic</td> <td>a sunset photo of a <b>pkblz</b> ball</td> <td>a watercolor photo of a <b>pkblz</b> ball</td> </tr> <tr> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(16).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(17).png" style="height:200px"> </td> <td align="center"><img src="https://huggingface.co/simonschoe/pokeball-machine/resolve/main/output/pokeball%20(18).png" style="height:200px"> </td> </tr> </table> ## Usage ```python from diffusers import StableDiffusionPipeline import torch device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') pipeline = StableDiffusionPipeline.from_pretrained('simonschoe/pokeball-machine').to(device) prompt = "a pkblz ball in the middle of a miniature jungle" image = pipeline( prompt, num_inference_steps=50, guidance_scale=10, num_images_per_prompt=1 ).images[0] image ```
Aleksandar/electra-srb-oscar
[ "pytorch", "electra", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "ElectraForMaskedLM" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9458064516129032 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2699 - Accuracy: 0.9458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2203 | 1.0 | 318 | 3.1656 | 0.7532 | | 2.4201 | 2.0 | 636 | 1.5891 | 0.8558 | | 1.1961 | 3.0 | 954 | 0.8037 | 0.9152 | | 0.5996 | 4.0 | 1272 | 0.4888 | 0.9326 | | 0.3306 | 5.0 | 1590 | 0.3589 | 0.9439 | | 0.2079 | 6.0 | 1908 | 0.3070 | 0.9439 | | 0.1458 | 7.0 | 2226 | 0.2809 | 0.9458 | | 0.1155 | 8.0 | 2544 | 0.2740 | 0.9461 | | 0.1021 | 9.0 | 2862 | 0.2699 | 0.9458 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.0+cu116 - Datasets 1.16.1 - Tokenizers 0.10.3
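For reference, the hyperparameters above correspond roughly to the following 🤗 Transformers setup. This is a sketch only: the custom distillation loss and teacher model used to produce this checkpoint are not reproduced here.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; the distillation-specific Trainer is not shown.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-distilled-clinc",
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    num_train_epochs=9,
    lr_scheduler_type="linear",
    seed=42,
)
```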
Ana1315/ana
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('cm2300/sd-class-butterflies-32') image = pipeline().images[0] image ```
AnonARR/qqp-bert
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### wolde_eyu_5120512 Dreambooth model trained by Eyuel with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
AnonymousSub/AR_EManuals-BERT
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.80 +/- 23.52 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the Hub repo id and filename below are placeholders to fill in for this model. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="<model>.zip") model = PPO.load(checkpoint) ```
AnonymousSub/AR_cline
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2208 - Accuracy: 0.9205 - F1: 0.9207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8134 | 1.0 | 250 | 0.3103 | 0.9055 | 0.9035 | | 0.2419 | 2.0 | 500 | 0.2208 | 0.9205 | 0.9207 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.1 - Datasets 2.6.1 - Tokenizers 0.11.0
AnonymousSub/AR_rule_based_roberta_only_classfn_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - food widget: - text: a photo of jairzza pizza in the Acropolis --- # DreamBooth model for the jairzza concept trained by jairNeto on the jairNeto/pizza dataset. This is a Stable Diffusion model fine-tuned on the jairzza concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of jairzza pizza** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `pizza` images for the food theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('jairNeto/jairzza-pizza') image = pipeline().images[0] image ```
AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 767.00 +/- 226.41 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga toastedshibe -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga toastedshibe -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga toastedshibe ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1841 - F1: 0.8550 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2971 | 1.0 | 1252 | 0.1979 | 0.8076 | | 0.1562 | 2.0 | 2504 | 0.1828 | 0.8404 | | 0.101 | 3.0 | 3756 | 0.1841 | 0.8550 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
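A minimal inference sketch for the resulting tagger; the repo id is a placeholder for wherever this fine-tuned checkpoint is pushed.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="<user>/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean works at Google in Mountain View."))
```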
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-12-29T00:12:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 302.91 +/- 10.95 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the Hub repo id and filename below are placeholders to fill in for this model. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="<model>.zip") model = PPO.load(checkpoint) ```
AnonymousSub/AR_rule_based_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2022-12-29T00:25:11Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-finetuned-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-en-es This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8937 - Rouge1: 32.6939 - Rouge2: 11.794 - Rougel: 31.9982 - Rougelsum: 31.9902 - Gen Len: 15.7947 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.251 | 1.0 | 7061 | 1.8937 | 32.6939 | 11.794 | 31.9982 | 31.9902 | 15.7947 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
AnonymousSub/AR_rule_based_twostagequadruplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-12-29T00:35:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -297.00 +/- 106.21 name: mean_reward verified: false --- # **ppo** Agent playing **LunarLander-v2** This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the Hub repo id and filename below are placeholders to fill in for this model. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="<model>.zip") model = PPO.load(checkpoint) ```
AnonymousSub/AR_rule_based_twostagetriplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2022-12-29T00:43:36Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('renee127/sd-class-butterflies-32') image = pipeline().images[0] image ```
AnonymousSub/AR_specter
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 701.50 +/- 210.88 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga matthh -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga matthh -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga matthh ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
AnonymousSub/EManuals_BERT_copy
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-12-29T01:05:43Z
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - wildcard widget: - text: a photo of ramrick character as a pickle --- # DreamBooth model for the ramrick concept trained by Kayvane on the Kayvane/dreambooth-hackathon-rick-and-morty-images-square dataset. Notes: - trained on square images, 20k steps on google colab - character name is ramrick, many pictures get blocked as nsfw - possibly because the subtoken #ick is close to something else - model is trained for too many steps / overfitted as it is effectively recreating the input images This is a Stable Diffusion model fine-tuned on the ramrick concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of ramrick character** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `character` images for the wildcard theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('Kayvane/rick-and-morty-ramrick-character') image = pipeline().images[0] image ```
AnonymousSub/SR_EManuals-BERT
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('renee127/sd-class-butterflies-64') image = pipeline().images[0] image ```
AnonymousSub/SR_EManuals-RoBERTa
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.03 +/- 22.81 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
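The Usage section above is left as a TODO, so here is a hedged sketch of the typical Stable-Baselines3 loading code. The repo id and filename are placeholders (the card does not name them) and must be replaced with the real values.

```python
# Placeholder sketch: the card leaves usage as a TODO, so repo_id and filename
# below are NOT from the card and must be replaced with the actual values.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="<user>/ppo-LunarLander-v2",   # placeholder
    filename="ppo-LunarLander-v2.zip",     # placeholder
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```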
AnonymousSub/SR_SDR_HF_model_base
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: thiagoms7/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
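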
AnonymousSub/SR_bert-base-uncased
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: t5-small-finetuned-en-to-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-en-to-es This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8937 - Bleu: 7.4133 - Gen Len: 15.9653 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 2.27 | 1.0 | 7061 | 1.8937 | 7.4133 | 15.9653 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
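The card reports BLEU and the training setup but no inference snippet. Below is a minimal usage sketch with the `transformers` text2text pipeline; the hub id is a placeholder based on the card's name field, and the `translate English to Spanish:` prefix is the usual T5 convention rather than something the card states.

```python
# Minimal sketch, not part of the original card. The model id is a placeholder
# taken from the card's "name" field; the T5-style task prefix is an assumption.
from transformers import pipeline

translator = pipeline("text2text-generation", model="<owner>/t5-small-finetuned-en-to-es")
out = translator("translate English to Spanish: The weather is nice today.", max_length=64)
print(out[0]["generated_text"])
```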
AnonymousSub/SR_rule_based_hier_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - generated_from_keras_callback model-index: - name: 001_M-BERT-claim-classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # この「クレーム判別」モデルの使い方 このモデルは、当該クレームがどのプロダクトについてのものかを判別します。右側の"Hosted inference API" 直下のBOXにお好きなクレームを入力し、"Compute"ボタンをクリックして下さい。30〜60秒前後で答えが返ってきます。答えは以下の5つのlabelのいずれかです。 入力は最大300〜400文字、日本語・英語他100カ国語以上に対応してます。教育用ですので、機密情報は入力しないで下さい。 # 5つのlabelの定義: - LABEL_0 : Bank account or service (銀行口座・サービス) - LABEL_1 : Checking or savings account(当座預金・普通預金) - LABEL_2 : Consumer Loan(消費者ローン) - LABEL_3 : Credit card(クレジットカード) - LABEL_4 : Mortgage(住宅ローン) ### 入力例  (正解 4: 'Mortgage') 「私の夫と私はローンデポを通じてローンのRefiローンを申請し、その後承認されました。私たちはそれが次週になると言われているプロセスで前進することにしました。 両当事者と鑑定評価で実行されました。私たちの料金は次の週に有効であることが開示されました。貸出金の担当者によって連絡しました。ローンが45日以内に閉じなかった場合、当社の元のレートオファーが拡張されます。ロックレートが期限切れになったことを示すXXXXから別のコミュニケーション(Eメール)を受け取り、新しい料金を取得する必要があります。」 # How to use this "claim classification" model This model determines which product the claim is about. Enter your favorite claim in the box directly under "Hosted inference API" on the right, and click the "Compute" button. You will receive an answer within 30-60 seconds. The answer is one of the following five labels. You can enter a maximum of 400 to 500 characters, and it supports Japanese, English, and more than 100 languages. # Warning This is for educational purposes only, please do not enter confidential information. # Definition of 5 labels - LABEL_0 : Bank account or service - LABEL_1 : Checking or savings account - LABEL_2 : Consumer Loan - LABEL_3 : Credit card - LABEL_4 : Mortgage # Input example (correct answer 4: 'Mortgage') I drew an advance of $2900.00 from my HELOC that I have with Wells Fargo. I have auto pay with Wells Fargo and a scheduled payment of $100.00 was taken from mychecking ac. I entered the bank, CA branch and paid $2800.00 to pay off the advance. Since that time, various transactions have been posted to this account. There is a principal adj debit in the amount of $170.00 followed by a principal reversal in the amount of $91. # 001_M-BERT-claim-classifier It achieves the following results on the evaluation set: ## Model description bert-base-multilingual-cased ## Intended uses & limitations This is solely for educational purposes. This cannot be used for investments or businesses in practice. I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using materials, instructions, methods, algorithm or ideas contained herein, or acting or refraining from acting as a result of such use. I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on me to correct any errors or defects in the codes and the software. ## Training and evaluation data ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.25.1 - TensorFlow 2.9.2 - Datasets 2.8.0 - Tokenizers 0.13.2
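The card explains how to use the hosted inference widget; the snippet below is a hedged sketch of the equivalent programmatic call. The hub id is a placeholder built from the card's display name, and since the card lists TensorFlow framework versions, `framework="tf"` may be needed if only TF weights are published.

```python
# Sketch of calling the classifier programmatically; the hub id is a placeholder.
from transformers import pipeline

label_names = {  # label definitions as given in the card
    "LABEL_0": "Bank account or service",
    "LABEL_1": "Checking or savings account",
    "LABEL_2": "Consumer Loan",
    "LABEL_3": "Credit card",
    "LABEL_4": "Mortgage",
}

classifier = pipeline(
    "text-classification",
    model="<owner>/001_M-BERT-claim-classifier",  # placeholder repository path
    # framework="tf",  # uncomment if only TensorFlow weights are available
)
pred = classifier("I drew an advance of $2900.00 from my HELOC that I have with Wells Fargo.")[0]
print(label_names.get(pred["label"], pred["label"]), round(pred["score"], 3))
```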
AnonymousSub/SR_rule_based_only_classfn_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2139 - Accuracy: 0.9285 - F1: 0.9285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8138 | 1.0 | 250 | 0.3186 | 0.9005 | 0.8957 | | 0.2426 | 2.0 | 500 | 0.2139 | 0.9285 | 0.9285 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
AnonymousSub/SR_rule_based_only_classfn_twostage_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - opus_books metrics: - bleu model-index: - name: my_awesome_opus_books_model results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: opus_books type: opus_books config: en-es split: train args: en-es metrics: - name: Bleu type: bleu value: 0.5036 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset. It achieves the following results on the evaluation set: - Loss: 2.4340 - Bleu: 0.5036 - Gen Len: 18.1951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 2.7976 | 1.0 | 4674 | 2.4859 | 0.4505 | 18.203 | | 2.7472 | 2.0 | 9348 | 2.4340 | 0.5036 | 18.1951 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
Access to model Ralf-ca/my-sentiment-model is restricted and you are not in the authorized list. Visit https://huggingface.co/Ralf-ca/my-sentiment-model to ask for access.
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: cc-by-nc-sa-4.0 language: - en thumbnail: "https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00122-2365281862-color%20photo%20emma%20stone%20in%20the%20style%20of%20Vint.png" tags: - stable-diffusion - v2 - text-to-image - image-to-image - Embedding --- Textual Inversion Embedding by General Awareness For SD 2.x trained on 768x768 images from various sources. Install by downloading the .pt embedding, and put it in the \embeddings folder. The two embeddings are a one two punch as Vint-3000 is more 1880s style of photography (some seeds will be different) while the Vint is more for 1940s onward though both can be used for anything you can dream of. Use keyword: vint, or vint-3000 depending on the embedding, and effect you are trying to achieve. color photo morgan freeman in the style of Vint-3000 ![Single Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00120-2365281862-color%20photo%20morgan%20freeman%20in%20the%20style%20of%20Vint-3000.png) color photo morgan freeman in the style of Vint ![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00121-2365281862-color%20photo%20morgan%20freeman%20in%20the%20style%20of%20Vint.png) color photo emma stone in the style of Vint ![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00122-2365281862-color%20photo%20emma%20stone%20in%20the%20style%20of%20Vint.png) color photo emma stone in the style of Vint-3000 ![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00124-345136640-color%20photo%20emma%20stone%20in%20the%20style%20of%20Vint-3000.png) color photo post apocalyptic city in the style of Vint-3000 ![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00127-345136640-color%20photo%20post%20apocalyptic%20city%20in%20the%20style%20of%20Vint-3000.png) color photo post apocalyptic city in the style of Vint ![Single_Samples](https://huggingface.co/GeneralAwareness/VintagePhotos/resolve/main/00128-345136640-color%20photo%20post%20apocalyptic%20city%20in%20the%20style%20of%20Vint.png)
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 --- # Erlangshen-BERT-120M-IE-Chinese * Github: [GTS-Engine](https://github.com/IDEA-CCNL/GTS-Engine) * Documentation: [GTS-Engine](https://gts-engine-doc.readthedocs.io/en/latest/docs/quick_start.html) ## 简介 Brief Introduction 本模型基于大规模信息抽取数据进行预训练,可支持few-shot、zero-shot场景下的实体识别、关系三元组抽取任务。 This model is pre-trained on large-scale information extraction data, to better support Named Entity Recognition (NER) and Relation Extraction (RE) tasks in few-shot/zero-shot scenarios. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | ---------- | ---------- | -------------- | -------- | ------------ | -------- | | 通用 General | 信息抽取 Information Extraction | 二郎神 Erlangshen | BagualuIEModel | 120M | Chinese | ## 下游效果 Performance Erlangshen-BERT-120M-IE-Chinese在多个信息抽取任务下进行测试。 其中,zh_weibo/MSRA/OntoNote4/Resume为NER任务,其中MSRA在原始数据下进行测试;SanWen/FinRE作为实体关系联合抽取任务进行测试,非单一关系分类任务。 部分参数设置如下: ``` batch_size=16 precision=16 max_epoch=50 lr=2e-5 weight_decay=0.1 warmup=0.06 max_length=512 ``` 我们分别在随机种子123/456/789下进行测试,并以[MacBERT-base, Chinese](https://github.com/ymcui/MacBERT)作为预训练模型保持相同参数进行训练作为对比baseline,得到效果计算平均,效果如下: | Dataset | Training epochs | Test precision | Test recall | Test f1 | Baseline f1 | | --------- | --------------- | -------------- | ----------- | ------- | ----------- | | zh_weibo | 10.3 | 0.7282 | 0.6447 | 0.6839 | 0.6778 | | MSRA | 5 | 0.9374 | 0.9299 | 0.9336 | 0.8483 | | OntoNote4 | 9 | 0.8640 | 0.8634 | 0.8636 | 0.7996 | | Resume | 15 | 0.9568 | 0.9658 | 0.9613 | 0.9479 | | SanWen | 6.7 | 0.3655 | 0.2072 | 0.2639 | 0.2655 | | FinRE | 7 | 0.5190 | 0.4274 | 0.4685 | 0.4559 | ## 使用 Usage GTS引擎(GTS-Engine)是一款开箱即用且性能强大的自然语言理解引擎,能够仅用小样本就能自动化生产NLP模型。GTS Engine包含两个训练引擎:乾坤鼎和八卦炉。 本模型为可在GTS-Engine八卦炉引擎信息抽取任务中,作为预训练模型进行finetune。 GTS-Engine文档参考:[GTS-Engine](https://gts-engine-doc.readthedocs.io/en/latest/docs/about.html) ## 引用 如果您在您的工作中使用了我们的模型,可以引用我们的[网站](https://github.com/IDEA-CCNL/GTS-Engine): You can also cite our website: ``` @misc{GTS-Engine, title={GTS-Engine}, author={IDEA-CCNL}, year={2022}, howpublished={\url{https://github.com/IDEA-CCNL/GTS-Engine}}, } ```
AnonymousSub/bert_mean_diff_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: en license: mit tags: - exbert datasets: - squad_v2 thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg model-index: - name: Shobhank-iiitdwd/DistBERT-squad2-QA-768d results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 73.8248 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFmZmFiN2E5ODZkOTkyMjQ1NTUzMmQwMjc0M2RlYzVlNmM4YTFlNzA4YzIwY2JkY2EyNDg2ZTY3OTdjZTVlZiIsInZlcnNpb24iOjF9.ZZ6c2OI3lzeNhuSWTh28j00zk-sPrqkTvdVBZv2wJc1D4YnR-xOj72haybT6MV_xeYqTg3-x9L8PsWSS20NaDw - type: f1 value: 77.1684 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzAxMDk1YzI5ZjA2N2ZmMzAxNjgxYzJiNzAzYmI1ZWU5ZDRmYWY3OWJmMjlmNDcyMGE0YWY5NjNhZTk4YWY5ZSIsInZlcnNpb24iOjF9.rF3raNGUSYv5D2xzWLZztD99vwDKvWb22LG32RomrDGP6XKTbCVqZzAw5UFw93jKb0VoLApbQQ-AOGxLj3U_Cg --- ## Overview **Language model:** Shobhank-iiitdwd/DistBERT-squad2-QA **Language:** English **Training data:** SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation **Eval data:** SQuAD 2.0 dev set **Infrastructure**: 1x V100 GPU **Published**: Dec 8th, 2021 ## Details - haystack's intermediate layer and prediction layer distillation features were used for training. bert-base-uncased-squad2 was used as the teacher model and DBERT_General_6L_768D was used as the student model. ## Hyperparameters ### Intermediate layer distillation ``` batch_size = 26 n_epochs = 5 max_seq_len = 384 learning_rate = 5e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 1 ``` ### Prediction layer distillation ``` batch_size = 26 n_epochs = 5 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 1 distillation_loss_weight = 1.0 ``` ## Performance ``` "exact": 71.87736882001179 "f1": 76.36111895973675 ```
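The card gives distillation hyperparameters and SQuAD 2.0 scores but no inference snippet; a minimal question-answering pipeline call would look like the sketch below. The model id comes from the card's model-index, while the question/context pair is purely illustrative.

```python
# Minimal usage sketch; the example question/context is illustrative only.
from transformers import pipeline

qa = pipeline("question-answering", model="Shobhank-iiitdwd/DistBERT-squad2-QA-768d")
result = qa(
    question="What dataset was the model trained on?",
    context="This extractive QA model was distilled from a BERT teacher and trained on SQuAD 2.0.",
)
print(result["answer"], round(result["score"], 3))
```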
AnonymousSub/consert-techqa
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: other --- model A(*0.48) + model B(*0.32) + model C(*0.20)<br> Anime model.<br> A custom merged model that handles detailed bird's-eye views of the city, a wide variety of poses with detailed backgrounds and clothing patterns, angle variations, underwear and nude poses on a bed, and just about anything else you ask of it.<br> <br>sample1.png<br> size:1280*768 Step:50 CfgScale:7 DPM++2M highres.fix:OFF<br> <br> recommended prompt addition 1: (extremely detailed CG unity 8k wallpaper)<br> recommended prompt addition 2: (sparkle, light rays, lens flare, light particles)<br> recommended prompt addition 3: (hyper detailed, exquisite detail)<br> recommended prompt addition 4: (cinematic lighting, sharp focus, bokeh highlight, cinematic postprocessing)<br> highres.fix<br> OFF: prioritizes landscape rendering, automatic wide angle<br> ON: prioritizes portraiture<br> The WaifuDiffusion v1.3 VAE is recommended.<br> 2023.1.24<br> v2.0(=v1.2) = v1.0(*0.80) + model D(*0.20)<br> :improved facial expressions<br> :improved instability in mouth rendering<br> :improved weak eye highlights<br> :more beautiful depiction of every part of a woman<br> :richer composition<br> :setting hires.fix=ON is recommended for both background rendering and portrait rendering<br> :depiction of limbs may be slightly unstable<br> :beautiful indoor backgrounds that look like real photos<br> <br>sample2.png<br> size:640*384->1280*768 Step:50 CfgScale:7 DPM++2M hires.fix:ON Latent upscale:*2 step:50 strength:0.75<br> <br> Commercial use is prohibited.<br>
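As a purely illustrative aside, the weighted recipe above (model A at 0.48, model B at 0.32, model C at 0.20) corresponds to a weighted average of checkpoint weights. The sketch below shows how such a merge is commonly done; the checkpoint filenames are placeholders, since the card does not name the source models.

```python
# Illustrative sketch of a weighted checkpoint merge; file names are placeholders.
import torch

recipe = {"modelA.ckpt": 0.48, "modelB.ckpt": 0.32, "modelC.ckpt": 0.20}

merged = {}
for path, weight in recipe.items():
    sd = torch.load(path, map_location="cpu")
    sd = sd.get("state_dict", sd)  # some SD checkpoints nest tensors under "state_dict"
    for key, tensor in sd.items():
        # keys missing from one checkpoint simply keep the partial weighted sum
        merged[key] = merged.get(key, 0) + weight * tensor.float()

torch.save({"state_dict": merged}, "merged_model.ckpt")
```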
AnonymousSub/declutr-emanuals-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 language: - en - zh datasets: - mnist metrics: - accuracy tags: - classification --- MNIST ====== ## Intro Hand-written digit recognition.
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa_copy
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="orenk/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.63 +/- 31.51 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from huggingface_sb3 import load_from_hub checkpoint = load_from_hub( repo_id="ali-issa/ppo-lunarLander-v2", filename="Landing_lunar_AI.zip", ) ... ```
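The snippet above stops after downloading the checkpoint. A hedged sketch of the usual next steps is below; the repo id and filename are taken from the card itself, while the loading and evaluation code is simply the standard Stable-Baselines3 pattern, not something the author specified.

```python
# Completion sketch using the repo_id/filename from the card; the evaluation part
# is the standard SB3 pattern and is an assumption, not the author's own code.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="ali-issa/ppo-lunarLander-v2",
    filename="Landing_lunar_AI.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```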
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 272.32 +/- 15.23 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- tags: - generated_from_trainer model-index: - name: test_trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.9501 - eval_accuracy: 0.491 - eval_runtime: 64.4098 - eval_samples_per_second: 31.051 - eval_steps_per_second: 3.881 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: - hi metrics: - wer tags: - ASR - Speech Recognition - Hindi ---
Anthos23/my-awesome-model
[ "pytorch", "tf", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
2022-12-29T13:10:57Z
--- license: openrail library_name: diffusers tags: - TPU - JAX - Flax - stable-diffusion - text-to-image - text-to-audio language: - en ---
Anthos23/sentiment-roberta-large-english-finetuned-sentiment-analysis
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-29T13:12:24Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3_0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="CyantifiCQ/Taxi-v3_0", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Antony/mint_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-29T13:21:07Z
--- tags: - autotrain - vision - image-classification datasets: - ivensamdh/autotrain-data-age3 widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 5.065800795931951 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 2658279907 - CO2 Emissions (in grams): 5.0658 ## Validation Metrics - Loss: 0.895 - Accuracy: 0.770 - Macro F1: 0.768 - Micro F1: 0.770 - Weighted F1: 0.768 - Macro Precision: 0.773 - Micro Precision: 0.770 - Weighted Precision: 0.773 - Macro Recall: 0.770 - Micro Recall: 0.770 - Weighted Recall: 0.770
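A usage sketch with the `transformers` image-classification pipeline is below. The hub id is a guess assembled from the dataset name and model ID shown above following AutoTrain's usual naming scheme, so verify the real repository path; the example image URL is one of the card's own widget samples.

```python
# Sketch only: the model id follows AutoTrain's usual naming and is an assumption.
from transformers import pipeline

classifier = pipeline("image-classification", model="ivensamdh/autotrain-age3-2658279907")
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds[:3])  # top predicted classes with scores
```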
Anubhav23/indianlegal
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-29T13:21:20Z
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - food widget: - text: a photo of jirostyle ramen noodles in the park --- # DreamBooth model for the jirostyle concept trained by Prgckwb on the Prgckwb/jiro-style-ramen dataset. This is a Stable Diffusion model fine-tuned on the jirostyle concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of jirostyle ramen noodles** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `ramen noodles` images for the food theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('Prgckwb/JiroStyle-ramen') prompt = "a photo of jirostyle ramen noodles" image = pipeline(prompt).images[0] ``` ## Generated Images **a watercolor painting of jirostyle ramen noodles** ![](a_watercolor_painting_of_jirostyle_ramen_noodles.png) **a photo of a dog eating jirostyle ramen noodles in the park** ![](a_photo_of_a_dog_eating_jirostyle_ramen_noodles_in_the_park.png) **a photo of jirostyle ramen noodles on the table** ![](a_photo_of_jirostyle_ramen_noodles_on_the_table.png)
Anupam/QuestionClassifier
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-29T13:24:29Z
--- language: - en license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Description Anything Elynia Diffusion is a latent text-to-image diffusion model based on Anything 3.0 and then fine-tuned on the main character of 'Battle for Wesnoth' add-ons using Dreambooth. This model has been created to explore the possibilities and limitations of Dreambooth training and to study how it learns when low-resolution pixelart videogame sprites are added to the dataset in addition to realistic artwork. To use this style in your generations, add `1girl, elynia` to the prompts. Dreambooth hyperparameters ```sh export MODEL_NAME="/home/{USER}/kml/models1/" export INSTANCE_DIR="/home/{USER}/kml/datasets/objects/elyniaA" export CLASS_DIR="/home/{USER}/kml/datasets/objects/elyniaX" export OUTPUT_DIR="/home/{USER}/kml/models2/" accelerate launch train_dreambooth.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --instance_data_dir=$INSTANCE_DIR \ --class_data_dir=$CLASS_DIR \ --output_dir=$OUTPUT_DIR \ --with_prior_preservation --prior_loss_weight=1.0 \ --instance_prompt="1girl, elynia" \ --class_prompt="1girl, faerie girl, very long hair, pink hair, yellow eyes, detailed green dress, brown skirt, detached long sleeves, translucent green blue diamond-shaped butterfly wings" \ --resolution=512 \ --train_batch_size=1 \ --gradient_accumulation_steps=1 \ --learning_rate=1e-6 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --num_class_images=200 \ --max_train_steps=800 \ --mixed_precision 'no' \ --train_text_encoder ``` Model Description License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: You can't use the model to deliberately produce nor share illegal or harmful outputs or content The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license https://huggingface.co/stabilityai/stable-diffusion-2 Downstream Uses This model can be used for entertainment purposes and as a generative art assistant. Acknowledgements This project would not have been possible without the incredible work by the CompVis Researchers, Wesnoth devs, artists and user made content makers. The dataset for training currently resides here https://drive.google.com/drive/folders/1-sa5eQ9ZgoW0hGu-jYhzIJs0qPpzeFLg?usp=share_link.
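The card documents the DreamBooth training command and the `1girl, elynia` trigger tokens but gives no inference snippet. A minimal diffusers sketch is below; the hub id is a placeholder for wherever this fine-tune is published, and the prompt reuses descriptors from the card's class prompt.

```python
# Minimal inference sketch, not from the card; the hub id is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "<owner>/anything-elynia-diffusion",  # placeholder repository path
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "1girl, elynia, detailed green dress, translucent butterfly wings, forest background"
image = pipe(prompt).images[0]
image.save("elynia.png")
```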
Apisate/DialoGPT-small-jordan
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: it datasets: - lmqg/qg_itquad pipeline_tag: text2text-generation tags: - answer extraction widget: - text: "<hl> Il 6 ottobre 1973 , la Siria e l' Egitto, con il sostegno di altre nazioni arabe, lanciarono un attacco a sorpresa su Israele, su Yom Kippur. <hl> Questo rinnovo delle ostilità nel conflitto arabo-israeliano ha liberato la pressione economica sottostante sui prezzi del petrolio. All' epoca, l' Iran era il secondo esportatore mondiale di petrolio e un vicino alleato degli Stati Uniti. Settimane più tardi, lo scià d' Iran ha detto in un' intervista: Naturalmente[il prezzo del petrolio] sta andando a salire Certamente! E come! Avete[Paesi occidentali] aumentato il prezzo del grano che ci vendete del 300 per cento, e lo stesso per zucchero e cemento." example_title: "Answering Extraction Example 1" - text: "<hl> Furono introdotti autocarri compatti, come la Toyota Hilux e il Datsun Truck, seguiti dal camion Mazda (venduto come il Ford Courier), e l' Isuzu costruito Chevrolet LUV. <hl> Mitsubishi rebranded il suo Forte come Dodge D-50 pochi anni dopo la crisi petrolifera. Mazda, Mitsubishi e Isuzu avevano partnership congiunte rispettivamente con Ford, Chrysler e GM. In seguito i produttori americani introdussero le loro sostituzioni nazionali (Ford Ranger, Dodge Dakota e la Chevrolet S10/GMC S-15), ponendo fine alla loro politica di importazione vincolata." example_title: "Answering Extraction Example 2" model-index: - name: lmqg/mt5-small-itquad-ae results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_itquad type: default args: default metrics: - name: BLEU4 (Answer Extraction) type: bleu4_answer_extraction value: 24.72 - name: ROUGE-L (Answer Extraction) type: rouge_l_answer_extraction value: 43.93 - name: METEOR (Answer Extraction) type: meteor_answer_extraction value: 40.39 - name: BERTScore (Answer Extraction) type: bertscore_answer_extraction value: 90.01 - name: MoverScore (Answer Extraction) type: moverscore_answer_extraction value: 80.28 - name: AnswerF1Score (Answer Extraction) type: answer_f1_score__answer_extraction value: 70.41 - name: AnswerExactMatch (Answer Extraction) type: answer_exact_match_answer_extraction value: 55.07 --- # Model Card of `lmqg/mt5-small-itquad-ae` This model is fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for answer extraction on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** it - **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="it", model="lmqg/mt5-small-itquad-ae") # model prediction answers = model.generate_a("Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-itquad-ae") output = pipe("<hl> Il 6 ottobre 1973 , la Siria e l' Egitto, con il sostegno di altre nazioni arabe, lanciarono un attacco a sorpresa su Israele, su Yom Kippur. <hl> Questo rinnovo delle ostilità nel conflitto arabo-israeliano ha liberato la pressione economica sottostante sui prezzi del petrolio. All' epoca, l' Iran era il secondo esportatore mondiale di petrolio e un vicino alleato degli Stati Uniti. Settimane più tardi, lo scià d' Iran ha detto in un' intervista: Naturalmente[il prezzo del petrolio] sta andando a salire Certamente! E come! Avete[Paesi occidentali] aumentato il prezzo del grano che ci vendete del 300 per cento, e lo stesso per zucchero e cemento.") ``` ## Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 55.07 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | AnswerF1Score | 70.41 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | BERTScore | 90.01 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_1 | 38.56 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_2 | 32.74 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_3 | 28.58 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_4 | 24.72 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | METEOR | 40.39 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | MoverScore | 80.28 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | ROUGE_L | 43.93 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_itquad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: None - model: google/mt5-small - max_length: 512 - max_length_output: 32 - epoch: 17 - batch: 32 - lr: 0.0005 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config 
file](https://huggingface.co/lmqg/mt5-small-itquad-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
ArBert/roberta-base-finetuned-ner-gmm-twitter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-29T14:11:28Z
--- tags: - generated_from_keras_callback model-index: - name: ru_propaganda_model_with_foreign_agent_mask results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ru_propaganda_model_with_foreign_agent_mask This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0659 - Validation Loss: 0.3494 - Train Accuracy: 0.9 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 370, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6139 | 0.3611 | 0.86 | 0 | | 0.2522 | 0.3060 | 0.86 | 1 | | 0.1836 | 0.2757 | 0.92 | 2 | | 0.1020 | 0.3094 | 0.92 | 3 | | 0.0659 | 0.3494 | 0.9 | 4 | ### Framework versions - Transformers 4.25.1 - TensorFlow 2.9.2 - Datasets 2.8.0 - Tokenizers 0.13.2
ArBert/roberta-base-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-12-29T14:13:01Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 15.03 +/- 87.72 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. To learn to code your own PPO agent and train it, see Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8 # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': False 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': 'KeWangRL' 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 1000000 'learning_rate': 0.00035 'num_envs': 8 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.015 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': 0.015 'repo_id': 'kewangRL/LunarLander-v2' 'batch_size': 1024 'minibatch_size': 256} ```
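As context for the hyperparameters listed above, here is an illustrative sketch of the clipped PPO objective that `clip_coef`, `ent_coef`, and `vf_coef` control. This is a generic sketch, not the cleanRL training script the card links to.

```python
# Illustrative PPO loss using the card's clip_coef=0.2, ent_coef=0.015, vf_coef=0.5.
# All tensor arguments are assumed to be per-sample values gathered during rollout.
import torch

def ppo_loss(new_logprob, old_logprob, advantages, returns, values, entropy,
             clip_coef=0.2, ent_coef=0.015, vf_coef=0.5):
    ratio = (new_logprob - old_logprob).exp()          # importance sampling ratio
    pg_loss1 = -advantages * ratio                     # unclipped policy gradient term
    pg_loss2 = -advantages * torch.clamp(ratio, 1.0 - clip_coef, 1.0 + clip_coef)
    pg_loss = torch.max(pg_loss1, pg_loss2).mean()     # pessimistic (clipped) objective
    v_loss = 0.5 * ((values - returns) ** 2).mean()    # value function loss
    return pg_loss - ent_coef * entropy.mean() + vf_coef * v_loss
```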
ArBert/roberta-base-finetuned-ner-kmeans-twitter
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2022-12-29T14:20:24Z
--- license: openrail library_name: diffusers tags: - TPU - JAX - Flax - stable-diffusion - text-to-image - text-to-audio language: - en ---
ArBert/roberta-base-finetuned-ner
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-12-29T14:25:19Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - opus_books metrics: - bleu model-index: - name: my_awesome_opus_books_model_mt5 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: opus_books type: opus_books config: en-es split: train args: en-es metrics: - name: Bleu type: bleu value: 0.0004 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model_mt5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the opus_books dataset. It achieves the following results on the evaluation set: - Loss: nan - Bleu: 0.0004 - Gen Len: 2.3199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 0.0 | 1.0 | 4674 | nan | 0.0004 | 2.3199 | | 0.0 | 2.0 | 9348 | nan | 0.0004 | 2.3199 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
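For reference, a sketch of the `Seq2SeqTrainingArguments` implied by the hyperparameters in the card above; `output_dir` and any field not listed in the card are assumptions. The NaN training loss reported above is consistent with a known failure mode of fine-tuning mT5 under fp16/AMP, though the card itself does not diagnose the cause.

```python
# Sketch of training arguments matching the card's listed hyperparameters
# (lr 2e-05, batch 16, 2 epochs, seed 42, native AMP). output_dir is an assumption.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_opus_books_model_mt5",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    seed=42,
    fp16=True,  # "Native AMP" in the card; a likely culprit for the NaN loss with mT5
)
```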