| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 4–81 |
| tags | sequence | |
| pipeline_tag | string | 17 classes |
| config | dict | |
| downloads | int64 | 0–59.7M |
| first_commit | timestamp[ns, tz=UTC] | |
| card | string | length 51–438k |
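The rows that follow appear to be exported from a Hugging Face dataset with the schema above. As a minimal sketch of how such a dump could be loaded and inspected with the `datasets` library (the dataset identifier below is a placeholder, not the actual source of these rows):

```python
from datasets import load_dataset

# Placeholder identifier -- substitute the Hub dataset these rows were exported from.
ds = load_dataset("your-org/model-card-metadata", split="train")

# The features mirror the columns above:
# modelId, tags, pipeline_tag, config, downloads, first_commit, card.
print(ds.features)

row = ds[0]
print(row["modelId"], row["pipeline_tag"], row["downloads"])
print(row["card"][:300])  # first characters of the model-card Markdown
```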
Carlork314/Carlos
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-02T19:37:51Z
--- library_name: stable-baselines3 tags: - PandaPush-v1 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: TQC results: - metrics: - type: mean_reward value: -7.00 +/- 1.79 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPush-v1 type: PandaPush-v1 --- # **TQC** Agent playing **PandaPush-v1** This is a trained model of a **TQC** agent playing **PandaPush-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo tqc --env PandaPush-v1 -orga sb3 -f logs/ python enjoy.py --algo tqc --env PandaPush-v1 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo tqc --env PandaPush-v1 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo tqc --env PandaPush-v1 -f logs/ -orga sb3 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 2048), ('buffer_size', 1000000), ('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'), ('gamma', 0.95), ('learning_rate', 0.001), ('n_timesteps', 1000000.0), ('policy', 'MultiInputPolicy'), ('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'), ('replay_buffer_class', 'HerReplayBuffer'), ('replay_buffer_kwargs', "dict( online_sampling=True, goal_selection_strategy='future', " 'n_sampled_goal=4, )'), ('tau', 0.05), ('normalize', False)]) ```
dccuchile/albert-xlarge-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- language: en thumbnail: http://www.huggingtweets.com/mrikasper/1654206041092/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/914206875419332608/26FrQMV2_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lars Kasper</div> <div style="text-align: center; font-size: 14px;">@mrikasper</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Lars Kasper. | Data | Lars Kasper | | --- | --- | | Tweets downloaded | 475 | | Retweets | 113 | | Short tweets | 10 | | Tweets kept | 352 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lbnyiin/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mrikasper's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y754vcz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y754vcz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mrikasper') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Chan/distilroberta-base-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-03T07:55:51Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Cheapestmedsshop/Buymodafinilus
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-03T08:51:01Z
--- language: en thumbnail: http://www.huggingtweets.com/mundodeportivo/1654247301367/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1277369340275437570/R-AXlYNT_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Mundo Deportivo</div> <div style="text-align: center; font-size: 14px;">@mundodeportivo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Mundo Deportivo. | Data | Mundo Deportivo | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 195 | | Short tweets | 26 | | Tweets kept | 3029 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17m7lnrt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mundodeportivo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mndpk3u) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mndpk3u/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mundodeportivo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Cheatham/xlm-roberta-large-finetuned-d1
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- language: de license: cc-by-nc-sa-4.0 --- ## jobGBERT This is a domain-adapted transformer-based language model for German-speaking job advertisements. It is based on [deepset/gbert-base](https://huggingface.co/deepset/gbert-base), and adapted to the domain of job advertisements through continued in-domain pretraining on 4 million German-speaking job ads from Switzerland 1990-2020 (5.9 GB data). ### Overview **Architecture:** BERT base <br> **Language:** German <br> **Domain:** Job advertisements <br> **See also:** [agne/jobBERT-de](https://huggingface.co/agne/jobBERT-de) ### License Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (cc-by-nc-sa-4.0) Please use the following citation when using our model: ```bibtex @inproceedings{ title = "Evaluation of Transfer Learning and Domain Adaptation for Analyzing German-Speaking Job Advertisements", author = "Gnehm, Ann-Sophie and Bühlmann, Eva and Clematide, Simon", booktitle = "Proceedings of the 13th Language Resources and Evaluation Conference", month = june, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", } ``` ### Intended usage and limitations You can use the model for masked language modeling, but it's intended to be fine-tuned on a downstream task. The model is trained on German-speaking job ads from Switzerland. It inherits potential bias of its base model, and may contain biases and stereotypes common in job advertisements. ### About us Ann-Sophie Gnehm: `gnehm [at] soziologie.uzh.ch` <br> Eva Bühlmann: `bühlmann [at] soziologie.uzh.ch` <br> Simon Clematide: `simon.clematide [at] cl.uzh.ch` <br> The [Swiss Job Market Monitor](https://www.stellenmarktmonitor.uzh.ch/en.html) aims at systematically expanding scientific knowledge about the job market and improving labour market transparency by informing the general public about current developments on the job market. **Get in touch:** [Mail](mailto:[email protected]) [Website](https://www.stellenmarktmonitor.uzh.ch/en.html) [Zenodo](https://doi.org/10.5281/zenodo.6497853) [SWISSUbase](https://www.swissubase.ch/de/catalogue/studies/11998/18157/overview)
Cheatham/xlm-roberta-large-finetuned-d12_2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # kimcando/ko-paraKQC-demo2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('kimcando/ko-paraKQC-demo2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('kimcando/ko-paraKQC-demo2') model = AutoModel.from_pretrained('kimcando/ko-paraKQC-demo2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kimcando/ko-paraKQC-demo2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 475 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 190, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Cheatham/xlm-roberta-large-finetuned-d1r01
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: ja datasets: - lmqg/qg_jaquad pipeline_tag: text2text-generation tags: - question generation widget: - text: "ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。" example_title: "Question Generation Example 1" - text: "『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる<hl>3万5000部<hl>が刷られた。他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。" example_title: "Question Generation Example 2" - text: "フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め<hl>30数点<hl>しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。" example_title: "Question Generation Example 3" model-index: - name: lmqg/mbart-large-cc25-jaquad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_jaquad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 32.16 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 52.95 - name: METEOR (Question Generation) type: meteor_question_generation value: 29.97 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 82.26 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 59.88 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 87.16 - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 87.16 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 87.17 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 63.08 - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 63.06 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 63.1 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer value: 76.7 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer value: 78.87 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer] type: 
qa_aligned_precision_bertscore_question_answer_generation_gold_answer value: 74.75 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer value: 53.64 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer value: 55.14 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer value: 52.32 --- # Model Card of `lmqg/mbart-large-cc25-jaquad-qg` This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question generation task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** ja - **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="ja", model="lmqg/mbart-large-cc25-jaquad-qg") # model prediction questions = model.generate_q(list_context="フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。", list_answer="30数点") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-jaquad-qg") output = pipe("ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-jaquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 82.26 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_1 | 57.05 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_2 | 45.45 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_3 | 37.81 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_4 | 32.16 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | METEOR | 29.97 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | MoverScore | 59.88 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | ROUGE_L | 52.95 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-jaquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_jaquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 87.16 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedF1Score (MoverScore) | 63.08 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedPrecision (BERTScore) | 87.17 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedPrecision (MoverScore) | 63.1 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedRecall (BERTScore) | 87.16 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedRecall (MoverScore) | 63.06 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mbart-large-cc25-jaquad-ae`](https://huggingface.co/lmqg/mbart-large-cc25-jaquad-ae). [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-jaquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_jaquad.default.lmqg_mbart-large-cc25-jaquad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 76.7 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedF1Score (MoverScore) | 53.64 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedPrecision (BERTScore) | 74.75 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedPrecision (MoverScore) | 52.32 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedRecall (BERTScore) | 78.87 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | QAAlignedRecall (MoverScore) | 55.14 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_jaquad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 32 - epoch: 12 - batch: 64 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 1 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-jaquad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Chun/DialoGPT-medium-dailydialog
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="sinhprous/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Chun/w-zh2en-mtm
[ "pytorch", "mbart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MBartForConditionalGeneration" ], "model_type": "mbart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - ar tags: - Arabic T5 - T5 - MSA - Arabic Text Summarization - Arabic News Title Generation - Arabic Paraphrasing widget: - text: "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين." --- # An Arabic abstractive text summarization model A fine-tuned AraT5 model on a dataset of 84,764 paragraph-summary pairs. More details on the fine-tuning of this model will be released later. The model can be used as follows: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline from arabert.preprocess import ArabertPreprocessor model_name="malmarjeh/t5-arabic-text-summarization" preprocessor = ArabertPreprocessor(model_name="") tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) pipeline = pipeline("text2text-generation",model=model,tokenizer=tokenizer) text = "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين." text = preprocessor.preprocess(text) result = pipeline(text, pad_token_id=tokenizer.eos_token_id, num_beams=3, repetition_penalty=3.0, max_length=200, length_penalty=1.0, no_repeat_ngram_size = 3)[0]['generated_text'] result >>> 'مواجهات عنيفة بين الجيش اللبناني ومحتجين في طرابلس' ``` ## Contact: <[email protected]>
CoderEFE/DialoGPT-marxbot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational", "has_space" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- library_name: stable-baselines3 tags: - donkey-avc-sparkfun-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: TQC results: - metrics: - type: mean_reward value: 552.57 +/- 285.35 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: donkey-avc-sparkfun-v0 type: donkey-avc-sparkfun-v0 --- # **TQC** Agent playing **donkey-avc-sparkfun-v0** This is a trained model of a **TQC** agent playing **donkey-avc-sparkfun-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Autoencoder: https://github.com/araffin/aae-train-donkeycar branch: `feat/race_june` <br/> Gym env: https://github.com/araffin/gym-donkeycar-1 branch: `feat/race_june` <br/> RL Zoo branch: `feat/gym-donkeycar` **Pretrained autoencoder** can be downloaded here: https://github.com/araffin/aae-train-donkeycar/releases/download/live-twitch-2/ae-32_avc.pkl ``` # Export path to autoencoder export AE_PATH=/path/to/ae-32_avc.pkl # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo tqc --env donkey-avc-sparkfun-v0 -orga araffin -f logs/ python enjoy.py --algo tqc --env donkey-avc-sparkfun-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo tqc --env donkey-avc-sparkfun-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo tqc --env donkey-avc-sparkfun-v0 -f logs/ -orga araffin ``` ## Hyperparameters ```python OrderedDict([('batch_size', 256), ('buffer_size', 200000), ('callback', [{'rl_zoo3.callbacks.ParallelTrainCallback': {'gradient_steps': 200}}, 'rl_zoo3.callbacks.LapTimeCallback']), ('ent_coef', 'auto'), ('env_wrapper', ['ae.wrapper.AutoencoderWrapper', {'rl_zoo3.wrappers.HistoryWrapper': {'horizon': 2}}]), ('gamma', 0.99), ('gradient_steps', 256), ('learning_rate', 0.00073), ('learning_starts', 500), ('n_timesteps', 2000000.0), ('normalize', "{'norm_obs': True, 'norm_reward': False}"), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(log_std_init=-3, net_arch=[256, 256], n_critics=2, ' 'use_expln=True)'), ('sde_sample_freq', 16), ('tau', 0.02), ('train_freq', 200), ('use_sde', True), ('use_sde_at_warmup', True), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ``` # Environment Arguments ```python {'conf': {'cam_resolution': (120, 160, 3), 'car_config': {'body_rgb': (226, 112, 18), 'body_style': 'donkey', 'car_name': 'Toni', 'font_size': 40}, 'frame_skip': 1, 'host': 'localhost', 'level': 'sparkfun_avc', 'log_level': 20, 'max_cte': 16, 'port': 9091, 'start_delay': 5.0}, 'min_throttle': -0.2, 'steer': 0.3} ```
Davlan/xlm-roberta-base-finetuned-swahili
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
40
null
--- language: en thumbnail: http://www.huggingtweets.com/centraldamiku/1654366478559/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1532142310741495808/VWMuTyjo_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Central da Miku</div> <div style="text-align: center; font-size: 14px;">@centraldamiku</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Central da Miku. | Data | Central da Miku | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 348 | | Short tweets | 801 | | Tweets kept | 2093 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/m8jk5mo9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @centraldamiku's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rp6i3tpo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rp6i3tpo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/centraldamiku') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Davlan/xlm-roberta-base-finetuned-wolof
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - accuracy - f1 language: - en widget: - text: "Broadcom agreed to acquire cloud computing company VMware in a $61 billion (€57bn) cash-and stock deal, massively diversifying the chipmaker’s business and almost tripling its software-related revenue to about 45% of its total sales. By the numbers: VMware shareholders will receive either $142.50 in cash or 0.2520 of a Broadcom share for each VMware stock. Broadcom will also assume $8 billion of VMware's net debt." - text: "Canadian Natural Resources Minister Jonathan Wilkinson told Bloomberg that the country could start supplying Europe with liquefied natural gas (LNG) in as soon as three years by converting an existing LNG import facility on Canada’s Atlantic coast into an export terminal. Bottom line: Wilkinson said what Canada cares about is that the new LNG facility uses a low-emission process for the gas and is capable of transitioning to exporting hydrogen later on." - text: "Google is being investigated by the UK’s antitrust watchdog for its dominance in the \"ad tech stack,\" the set of services that facilitate the sale of online advertising space between advertisers and sellers. Google has strong positions at various levels of the ad tech stack and charges fees to both publishers and advertisers. A step back: UK Competition and Markets Authority has also been investigating whether Google and Meta colluded over ads, probing into the advertising agreement between the two companies, codenamed Jedi Blue." - text: "Shares in Twitter closed 6.35% up after an SEC 13D filing revealed that Elon Musk pledged to put up an additional $6.25 billion of his own wealth to fund the $44 billion takeover deal, lifting the total to $33.5 billion from an initial $27.25 billion. In other news: Former Twitter CEO Jack Dorsey announced he's stepping down, but would stay on Twitter’s board \\“until his term expires at the 2022 meeting of stockholders.\"" model-index: - name: bert-keyword-discriminator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-keyword-discriminator This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1310 - Precision: 0.8522 - Recall: 0.8868 - Accuracy: 0.9732 - F1: 0.8692 - Ent/precision: 0.8874 - Ent/accuracy: 0.9246 - Ent/f1: 0.9056 - Con/precision: 0.8011 - Con/accuracy: 0.8320 - Con/f1: 0.8163 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 | Ent/precision | Ent/accuracy | Ent/f1 | Con/precision | Con/accuracy | Con/f1 | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|:-------------:|:------------:|:------:|:-------------:|:------------:|:------:| | 0.1744 | 1.0 | 1875 | 0.1261 | 0.7176 | 0.7710 | 0.9494 | 0.7433 | 0.7586 | 0.8503 | 0.8018 | 0.6514 | 0.6561 | 0.6537 | | 0.1261 | 2.0 | 3750 | 0.1041 | 0.7742 | 0.8057 | 0.9600 | 0.7896 | 0.8083 | 0.8816 | 0.8433 | 0.7185 | 0.6957 | 0.7070 | | 0.0878 | 3.0 | 5625 | 0.0979 | 0.8176 | 0.8140 | 0.9655 | 0.8158 | 0.8518 | 0.8789 | 0.8651 | 0.7634 | 0.7199 | 0.7410 | | 0.0625 | 4.0 | 7500 | 0.0976 | 0.8228 | 0.8643 | 0.9696 | 0.8430 | 0.8515 | 0.9182 | 0.8836 | 0.7784 | 0.7862 | 0.7823 | | 0.0456 | 5.0 | 9375 | 0.1047 | 0.8304 | 0.8758 | 0.9704 | 0.8525 | 0.8758 | 0.9189 | 0.8968 | 0.7655 | 0.8133 | 0.7887 | | 0.0342 | 6.0 | 11250 | 0.1207 | 0.8363 | 0.8887 | 0.9719 | 0.8617 | 0.8719 | 0.9274 | 0.8988 | 0.7846 | 0.8327 | 0.8080 | | 0.0256 | 7.0 | 13125 | 0.1241 | 0.848 | 0.8892 | 0.9731 | 0.8681 | 0.8791 | 0.9299 | 0.9038 | 0.8019 | 0.8302 | 0.8158 | | 0.0205 | 8.0 | 15000 | 0.1310 | 0.8522 | 0.8868 | 0.9732 | 0.8692 | 0.8874 | 0.9246 | 0.9056 | 0.8011 | 0.8320 | 0.8163 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
Davlan/xlm-roberta-base-ner-hrl
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
760
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="kingabzpro/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
DeadBeast/emoBERTTamil
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:tamilmixsentiment", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 182.82 +/- 79.11 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Declan/ChicagoTribune_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - stackexchange_xml - code_search_net --- # ITESM/sentece-embeddings-BETO This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ITESM/sentece-embeddings-BETO') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ITESM/sentece-embeddings-BETO') model = AutoModel.from_pretrained('ITESM/sentece-embeddings-BETO') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ITESM/sentece-embeddings-BETO) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 16 with parameters: ``` {'batch_size': 100} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 1, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Declan/WallStreetJournal_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2022-06-05T13:46:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 274.72 +/- 17.63 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
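A minimal loading sketch for a card like this one. The repo id and zip filename below are placeholders (they are not given in the card), and a classic-gym `LunarLander-v2` install (`gym[box2d]`) is assumed:

```python
import gym  # classic gym API used by this generation of SB3 checkpoints

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder hub location: replace with the actual repo id / filename of this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Quick sanity check: roll out the policy for a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```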
DeepPavlov/rubert-base-cased-conversational
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17,362
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv2-er-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-er-ner This model is a fine-tuned version of [renjithks/layoutlmv2-cord-ner](https://huggingface.co/renjithks/layoutlmv2-cord-ner) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1217 - Precision: 0.7810 - Recall: 0.8085 - F1: 0.7945 - Accuracy: 0.9747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 41 | 0.5441 | 0.0 | 0.0 | 0.0 | 0.8851 | | No log | 2.0 | 82 | 0.4660 | 0.1019 | 0.0732 | 0.0852 | 0.8690 | | No log | 3.0 | 123 | 0.2506 | 0.4404 | 0.4828 | 0.4606 | 0.9240 | | No log | 4.0 | 164 | 0.1725 | 0.6120 | 0.6076 | 0.6098 | 0.9529 | | No log | 5.0 | 205 | 0.1387 | 0.7204 | 0.7245 | 0.7225 | 0.9671 | | No log | 6.0 | 246 | 0.1237 | 0.7742 | 0.7747 | 0.7745 | 0.9722 | | No log | 7.0 | 287 | 0.1231 | 0.7619 | 0.7554 | 0.7586 | 0.9697 | | No log | 8.0 | 328 | 0.1199 | 0.7994 | 0.7719 | 0.7854 | 0.9738 | | No log | 9.0 | 369 | 0.1197 | 0.7937 | 0.8113 | 0.8024 | 0.9741 | | No log | 10.0 | 410 | 0.1284 | 0.7581 | 0.7597 | 0.7589 | 0.9690 | | No log | 11.0 | 451 | 0.1172 | 0.7792 | 0.7848 | 0.7820 | 0.9738 | | No log | 12.0 | 492 | 0.1192 | 0.7913 | 0.7970 | 0.7941 | 0.9743 | | 0.1858 | 13.0 | 533 | 0.1175 | 0.7960 | 0.8006 | 0.7983 | 0.9753 | | 0.1858 | 14.0 | 574 | 0.1184 | 0.7724 | 0.8034 | 0.7876 | 0.9740 | | 0.1858 | 15.0 | 615 | 0.1171 | 0.7882 | 0.8142 | 0.8010 | 0.9756 | | 0.1858 | 16.0 | 656 | 0.1195 | 0.7829 | 0.8070 | 0.7948 | 0.9745 | | 0.1858 | 17.0 | 697 | 0.1209 | 0.7810 | 0.8006 | 0.7906 | 0.9743 | | 0.1858 | 18.0 | 738 | 0.1241 | 0.7806 | 0.7963 | 0.7884 | 0.9740 | | 0.1858 | 19.0 | 779 | 0.1222 | 0.7755 | 0.8027 | 0.7889 | 0.9742 | | 0.1858 | 20.0 | 820 | 0.1217 | 0.7810 | 0.8085 | 0.7945 | 0.9747 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
DeepPavlov/rubert-base-cased
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1905.07213", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
148,127
2022-06-05T15:54:51Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 269.55 +/- 17.65 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
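For this card too, a hedged rollout sketch; the repo id and filename are placeholders, and the classic (pre-0.26) gym step/reset API is assumed:

```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder hub location; substitute the real repo id and zip filename.
path = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(path)

# Roll out a single deterministic episode (classic gym API: 4-tuple step return).
env = gym.make("LunarLander-v2")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward:.1f}")
```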
DeltaHub/adapter_t5-3b_qnli
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: finetuning-cardiffnlp-sentiment-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-cardiffnlp-sentiment-model This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2685 - Accuracy: 0.9165 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
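A hedged inference sketch for a checkpoint like this one. The repo id is a placeholder (the card does not give the hub location), and the label set is whatever the fine-tuned classification head defines:

```python
from transformers import pipeline

# Placeholder repo id: point this at wherever the fine-tuned checkpoint was pushed.
classifier = pipeline("text-classification", model="<user>/finetuning-cardiffnlp-sentiment-model")

print(classifier("The update made the app noticeably faster."))
# -> e.g. [{'label': ..., 'score': ...}]  (labels depend on the fine-tuning data)
```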
DemangeJeremy/4-sentiments-with-flaubert
[ "pytorch", "flaubert", "text-classification", "fr", "transformers", "sentiments", "french", "flaubert-large" ]
text-classification
{ "architectures": [ "FlaubertForSequenceClassification" ], "model_type": "flaubert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
226
null
--- library_name: keras tags: - 3D-image-classification --- ## Model description This model is a 3D convolutional neural network trained to predict the presence of viral pneumonia in computed tomography (CT) scans. [Spaces link](https://huggingface.co/spaces/keras-io/3D_CNN_Pneumonia) [Keras Example Link](https://keras.io/examples/vision/3D_image_classification/) ## Training and evaluation data Subset of the [MosMedData: Chest CT Scans with COVID-19 Related Findings](https://www.medrxiv.org/content/10.1101/2020.05.20.20100362v1) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 100000, 'decay_rate': 0.96, 'staircase': True, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
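A hedged loading sketch using the Hub's Keras helper. The repo id and the 128 x 128 x 64 input volume shape are assumptions taken from the linked Keras example, not stated in this card:

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Placeholder repo id for this 3D CNN checkpoint.
model = from_pretrained_keras("<user>/3D_image_classification")

# The linked Keras example preprocesses CT volumes to 128 x 128 x 64 voxels, single channel.
dummy_scan = np.random.rand(1, 128, 128, 64, 1).astype("float32")
prob = model.predict(dummy_scan)[0][0]  # sigmoid output: probability of abnormality
print(f"predicted probability of abnormality: {prob:.3f}")
```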
Deniskin/emailer_medium_300
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-p4k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-p4k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Tokenizers 0.12.1
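A hedged generation sketch; the repo id is a placeholder and the sampling settings are purely illustrative:

```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned GPT-2 checkpoint.
generator = pipeline("text-generation", model="<user>/gpt2-p4k")

out = generator("The album opens with", max_new_tokens=40, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```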
Deniskin/essays_small_2000i
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_keras_callback model-index: - name: robingeibel/longformer-base-finetuned-big_patent results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # robingeibel/longformer-base-finetuned-big_patent This model is a fine-tuned version of [robingeibel/longformer-base-finetuned-big_patent](https://huggingface.co/robingeibel/longformer-base-finetuned-big_patent) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.1860 - Validation Loss: 1.0692 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 152946, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.1860 | 1.0692 | 0 | ### Framework versions - Transformers 4.19.4 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
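The card names the checkpoint as robingeibel/longformer-base-finetuned-big_patent. A hedged fill-mask sketch follows, assuming the run used the masked-language-modelling objective (the card does not state the training objective explicitly):

```python
from transformers import pipeline

# Repo id taken from the card title; the fill-mask task is an assumption.
fill = pipeline("fill-mask", model="robingeibel/longformer-base-finetuned-big_patent")

# Longformer uses the RoBERTa-style <mask> token.
for pred in fill("The <mask> is connected to the housing by a resilient hinge."):
    print(pred["token_str"], round(pred["score"], 3))
```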
Denver/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_keras_callback model-index: - name: reqscibert-tapt-epoch49 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # reqscibert-tapt-epoch49 This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 34950, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
DeskDown/MarianMixFT_en-fil
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-06-05T18:10:36Z
--- tags: - generated_from_keras_callback model-index: - name: reqscibert-tapt-epoch20 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # reqscibert-tapt-epoch20 This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 34950, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
DeskDown/MarianMixFT_en-my
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - donkey-minimonaco-track-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: TQC results: - metrics: - type: mean_reward value: 386.49 +/- 0.77 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: donkey-minimonaco-track-v0 type: donkey-minimonaco-track-v0 --- # **TQC** Agent playing **donkey-minimonaco-track-v0** This is a trained model of a **TQC** agent playing **donkey-minimonaco-track-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Autoencoder: https://github.com/araffin/aae-train-donkeycar branch: `feat/race_june` <br/> Gym env: https://github.com/araffin/gym-donkeycar-1 branch: `feat/race_june` <br/> RL Zoo branch: `feat/gym-donkeycar` **Pretrained autoencoder** can be downloaded here: https://github.com/araffin/aae-train-donkeycar/releases/download/live-twitch-2/ae-32_monaco.pkl ``` export AE_PATH=/path/to/ae-32_monaco.pkl # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo tqc --env donkey-minimonaco-track-v0 -orga araffin -f logs/ python enjoy.py --algo tqc --env donkey-minimonaco-track-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo tqc --env donkey-minimonaco-track-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo tqc --env donkey-minimonaco-track-v0 -f logs/ -orga araffin ``` ## Hyperparameters ```python OrderedDict([('batch_size', 256), ('buffer_size', 200000), ('callback', [{'rl_zoo3.callbacks.ParallelTrainCallback': {'gradient_steps': 200}}, 'rl_zoo3.callbacks.LapTimeCallback']), ('ent_coef', 'auto'), ('env_wrapper', [{'gym.wrappers.time_limit.TimeLimit': {'max_episode_steps': 10000}}, 'ae.wrapper.AutoencoderWrapper', {'rl_zoo3.wrappers.HistoryWrapper': {'horizon': 2}}]), ('gamma', 0.99), ('gradient_steps', 256), ('learning_rate', 0.00073), ('learning_starts', 500), ('n_timesteps', 2000000.0), ('normalize', "{'norm_obs': True, 'norm_reward': False}"), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(log_std_init=-3, net_arch=[256, 256], n_critics=2, ' 'use_expln=True)'), ('sde_sample_freq', 16), ('tau', 0.02), ('train_freq', 200), ('use_sde', True), ('use_sde_at_warmup', True), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ``` # Environment Arguments ```python {'conf': {'cam_resolution': (120, 160, 3), 'car_config': {'body_rgb': (226, 112, 18), 'body_style': 'donkey', 'car_name': 'Toni', 'font_size': 40}, 'frame_skip': 1, 'host': 'localhost', 'level': 'mini_monaco', 'log_level': 20, 'max_cte': 8, 'port': 9091, 'start_delay': 5.0}, 'min_throttle': -0.2, 'steer': 0.8} ```
DeskDown/MarianMix_en-zh_to_vi-ms-hi-ja
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
# My Story model {'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1} Arthur goes to the beach. Arthur was at the beach. His parents got him a towel for the trip. He lay down and got out of the sand. Arthur put on his towel and went to the ocean. He felt very refreshed as he surfed and swam for a bit. Arthur goes to the beach. Arthur has always been scared to go to the beach. But his friends convinced him to go. Arthur decided to try it. He found the water to be really cold. He turned around and went back to the car. Arthur goes to the beach. Arthur was very lonely. He decided to go to the beach. He packed his bathing suit and towel. He got ready to go to the beach. Arthur arrived at the beach and relaxed on his chair. Arthur goes to the beach. Arthur loved to surf and was always looking for new places to surf. He decided to head to the beach with his friends. Arthur drove for hours to find the spot and found it. Arthur and his friends went in and made it their new place. Arthur and his friends spent all day playing in the sun. Arthur goes to the beach. Arthur really wanted to go to the beach. Arthur was afraid of the cold water. Arthur called a friend for a swim meetup. Arthur met up with his friend. Arthur had a fun time at the beach at the end of the day. {'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05} Arthur goes to the beach. Arthur loves to swim. He decides to go swimming at the beach. Arthur gets a towel and a water bottle. He swam all afternoon. At the end of the day, he was soaked! Arthur goes to the beach. Arthur always wanted to go to the beach. One day his friends told him he had to go. Arthur called the beach and made plans. The next morning he drove to the beach. Arthur had a great time at the beach that day! Arthur goes to the beach. Arthur was always bored with life. He had no idea where to go on vacation. Arthur decided to go to the beach. He packed up his bag and drove to the beach. Arthur found it so much fun that he left the city. Arthur goes to the beach. Arthur went to the beach with his friends. They decided to go swimming. Arthur thought it would be fun to jump in the water. He splashed around until the sun was shining in the sky. After the sun came up, Arthur swam out into the ocean. Arthur goes to the beach. Arthur was feeling lonely one day. He decided to go to the beach. He packed his bag and drove to the beach. He walked to the beach and looked for many people. The people were nice and he met a new friend. {'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1} Arthur goes to the beach. Arthur is going to the beach. His family tells him not to go because they have been looking forward to it. He decides to go anyway. Arthur finds the beach very relaxing. He is glad he went to the beach. Arthur goes to the beach. Arthur had never been to the beach before. He decided to go one day. Arthur packed a bag of snacks for the trip. He made his way to the beach. When he got there, he found out it was very sunny. Arthur goes to the beach. Arthur was having a great time at the beach with his family. He was playing in the water when he saw an angry turtle. The turtle had attacked the boat that Arthur was on. Arthur ran away as fast as he could, hoping no one would see him. But then, a huge wave crashed against the shore! Arthur goes to the beach. Arthur is bored and decides he wants to go to the beach. He arrives at the beach and sets up his tent. He then sets up a chair and a picnic table for himself. 
Finally, he lays down and gets ready to go. Arthur has a great time at the beach at the end of the day! Arthur goes to the beach. Arthur always wanted to go to the beach. His friends told him he was too old to go. Finally his parents took him out of school and took him. He drove to the beach and got his sandals and towels ready. When Arthur went to the beach, he realized it was not as bad as he thought. {'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15} Arthur goes to the beach. Arthur was going to go to the beach with his friends. He packed up his things and drove to the beach. When he got there, it was very crowded. Arthur had to wait a long time to get his sandals. Finally, he finally arrived at the beach and played in the water. Arthur goes to the beach. Arthur was very excited about going on a trip to the beach. He packed up his car and drove to the beach. When he arrived, he saw that it was very crowded. Arthur realized that he had forgotten his sunscreen! Arthur decided not to go to the beach. Arthur goes to the beach. Arthur was out on a date with his girlfriend. They went to the beach and had fun swimming in the water. Afterwards, they walked around the beach for awhile. After walking, they saw a beautiful sunset. Finally, they left the beach and went home. Arthur goes to the beach. Arthur was excited for his trip to the beach. He packed up his car and drove out to the beach. Once he got there, Arthur realized it was really hot outside. The air conditioning in his car was broken. Arthur decided to leave without going to the beach. Arthur goes to the beach. Arthur wanted to go to the beach. He got his friends together and they all went to the beach. They played in the sand for a while then swam in the water. Finally, Arthur was tired but still had fun. Arthur decided he would go back next summer. {'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2} Arthur goes to the beach. Arthur is feeling very bored one day. He decides he needs something to do. He heads out to the beach and finds a spot. He plays in the sand for hours. Finally, he is happy that he no longer feels bored. Arthur goes to the beach. Arthur was going to go to the beach with his friends. He had never been before but he decided to try it. They all packed up their things and headed out. When they got there, Arthur realized that he forgot his sunscreen! Luckily, his friend brought him a bottle of water so he could use it. Arthur goes to the beach. Arthur had always wanted to go to the beach. He saved up his money for a week and finally went on vacation. On the day of his trip, he was so excited that he forgot all about work! He spent hours at the beach and even more when he got home. Afterwards, he decided he would never forget to pay attention to work again. Arthur goes to the beach. Arthur is feeling very tired one day. He decides he needs something to do. He calls his friend and asks him if he wants to go to the beach. His friend says yes. They spend the afternoon playing in the sand. Arthur goes to the beach. Arthur had always wanted to go to the beach. He saved up for a few months so he could take his trip. Finally, Arthur went to the beach and spent all day playing in the water. Afterwards, he was very tired but happy that he finally got to the beach. The next morning, he decided it would be best to go back home.
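The parameter dictionaries above correspond directly to Hugging Face generation arguments. A hedged sketch of how such a sampling sweep is typically run (the checkpoint name and prompt are placeholders, not taken from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint name for the story model.
tok = AutoTokenizer.from_pretrained("<user>/story-model")
model = AutoModelForCausalLM.from_pretrained("<user>/story-model")

# Two of the sampling configurations shown above.
settings = [
    {"top_p": 0.9, "top_k": 50, "temperature": 1.0, "repetition_penalty": 1.0},
    {"top_p": 0.9, "top_k": 40, "temperature": 0.6, "repetition_penalty": 1.15},
]

prompt = "Arthur goes to the beach."
inputs = tok(prompt, return_tensors="pt")
for cfg in settings:
    out = model.generate(**inputs, do_sample=True, max_new_tokens=60,
                         pad_token_id=tok.eos_token_id, **cfg)
    print(cfg)
    print(tok.decode(out[0], skip_special_tokens=True), "\n")
```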
Devid/DialoGPT-small-Miku
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: - ru widget: - text: "<IN>Как нам все-таки сделать AGI?\n<OUT>" metrics: - loss: 3.3 - perplexity: 25.7528 --- Started from sberbank-ai/rugpt3medium_based_on_gpt2 and fine-tuned on AGIRussia chats (Russian). Only 3 epochs have been trained so far (perplexity is still falling); training is in progress...
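A hedged generation sketch following the <IN>/<OUT> prompt format shown in the widget. The checkpoint name is a placeholder; only the base model sberbank-ai/rugpt3medium_based_on_gpt2 is named in the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder name for the fine-tuned checkpoint; the card only names the base model.
name = "<user>/rugpt3medium-agirussia"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Prompt format taken from the widget: the model is expected to continue after <OUT>.
prompt = "<IN>Как нам все-таки сделать AGI?\n<OUT>"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, do_sample=True, top_p=0.95, max_new_tokens=60,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```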
DevsIA/Devs_IA
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: En-Af results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # En-Af This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-af](https://huggingface.co/Helsinki-NLP/opus-mt-en-af) on the None dataset. It achieves the following results on the evaluation set: Before training: - 'eval_bleu': 35.055184951449 - 'eval_loss': 2.225693941116333 After training: - Loss: 2.0057 - Bleu: 44.2309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
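For reference, a hedged inference sketch with the base checkpoint named in the card (Helsinki-NLP/opus-mt-en-af); the fine-tuned repo id and evaluation dataset are not specified, and the reported BLEU presumably comes from sacrebleu-style scoring over that held-out set:

```python
from transformers import pipeline

# The base checkpoint is named in the card; the fine-tuned repo id is not,
# so the base model is loaded here purely for illustration.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-af")

print(translator("The weather is lovely today.")[0]["translation_text"])
```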
Dibyaranjan/nl_image_search
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/cz_binance/1664010956441/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1572269909513478146/dfyw817W_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">CZ 🔶 Binance</div> <div style="text-align: center; font-size: 14px;">@cz_binance</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from CZ 🔶 Binance. | Data | CZ 🔶 Binance | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 149 | | Short tweets | 473 | | Tweets kept | 2624 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19171g9o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cz_binance's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ngvvhd8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ngvvhd8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cz_binance') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Dilmk2/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - victorlifan/autotrain-data-song_title_generate co2_eq_emissions: 11.013963276910237 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 939531516 - CO2 Emissions (in grams): 11.013963276910237 ## Validation Metrics - Loss: 1.1184396743774414 - Rouge1: 54.9539 - Rouge2: 40.7878 - RougeL: 54.8616 - RougeLsum: 54.8682 - Gen Len: 5.1429 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/victorlifan/autotrain-song_title_generate-939531516 ```
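The same call, sketched with Python's requests library; the URL and payload mirror the cURL line above, and the API key is a placeholder:

```python
import requests

API_URL = "https://api-inference.huggingface.co/victorlifan/autotrain-song_title_generate-939531516"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

# Same payload as the cURL example; requests sets the JSON content type automatically.
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```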
DimaOrekhov/cubert-method-name
[ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tiny-bert-sst2-distilled-model results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.838302752293578 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-bert-sst2-distilled-model This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.2592 - Accuracy: 0.8383 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5303 | 1.0 | 4210 | 1.2542 | 0.8222 | | 0.4503 | 2.0 | 8420 | 1.1260 | 0.8211 | | 0.3689 | 3.0 | 12630 | 1.2325 | 0.8234 | | 0.3122 | 4.0 | 16840 | 1.2533 | 0.8337 | | 0.2764 | 5.0 | 21050 | 1.2726 | 0.8337 | | 0.254 | 6.0 | 25260 | 1.2609 | 0.8337 | | 0.2358 | 7.0 | 29470 | 1.2592 | 0.8383 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.10.1+cu113 - Datasets 1.15.1 - Tokenizers 0.12.1
DingleyMaillotUrgell/homer-bot
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: reqroberta-tapt-epoch20 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # reqroberta-tapt-epoch20 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 37100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
DivyanshuSheth/T5-Seq2Seq-Final
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: reqroberta-tapt-epoch33 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # reqroberta-tapt-epoch33 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 37100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
Dizoid/Lll
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-05T23:19:44Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: it datasets: - lmqg/qg_itquad pipeline_tag: text2text-generation tags: - question generation widget: - text: "<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento." example_title: "Question Generation Example 1" - text: "L' individuazione del petrolio e lo sviluppo di nuovi giacimenti richiedeva in genere <hl> da cinque a dieci anni <hl> prima di una produzione significativa." example_title: "Question Generation Example 2" - text: "il <hl> Giappone <hl> è stato il paese più dipendente dal petrolio arabo." example_title: "Question Generation Example 3" model-index: - name: lmqg/mt5-small-itquad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_itquad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 7.37 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 21.93 - name: METEOR (Question Generation) type: meteor_question_generation value: 17.57 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 80.8 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 56.79 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 87.66 - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 87.57 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 87.76 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 61.6 - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 61.48 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 61.73 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer value: 81.63 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer value: 82.28 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer value: 81.04 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer value: 55.85 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer value: 56.14 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer value: 55.6 --- 
# Model Card of `lmqg/mt5-small-itquad-qg` This model is fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question generation task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** it - **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="it", model="lmqg/mt5-small-itquad-qg") # model prediction questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-itquad-qg") output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 80.8 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_1 | 22.78 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_2 | 14.93 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_3 | 10.34 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_4 | 7.37 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | METEOR | 17.57 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | MoverScore | 56.79 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | ROUGE_L | 21.93 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 87.66 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedF1Score (MoverScore) | 61.6 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (BERTScore) | 87.76 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (MoverScore) | 61.73 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (BERTScore) | 87.57 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (MoverScore) | 61.48 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mt5-small-itquad-ae`](https://huggingface.co/lmqg/mt5-small-itquad-ae). [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.lmqg_mt5-small-itquad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 81.63 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedF1Score (MoverScore) | 55.85 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (BERTScore) | 81.04 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (MoverScore) | 55.6 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (BERTScore) | 82.28 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (MoverScore) | 56.14 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_itquad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: google/mt5-small - max_length: 512 - max_length_output: 32 - epoch: 15 - batch: 16 - lr: 0.0005 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-itquad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Dkwkk/Da
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: reqroberta-tapt-epoch43 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # reqroberta-tapt-epoch43 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 37100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
Dmitriiserg/Pxd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: reqroberta-tapt-epoch50 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # reqroberta-tapt-epoch50 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 37100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
Doiman/DialoGPT-medium-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-grammar-corruption-edits results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-grammar-corruption-edits This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
DongHyoungLee/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="TinySuitStarfish/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Doohae/roberta
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="rushic24/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Doquey/DialoGPT-small-Michaelbot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2022-06-06T02:23:11Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2-finetuned-funsd-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-funsd-test This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.8.0+cu101 - Datasets 2.2.2 - Tokenizers 0.12.1
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-12
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
2022-06-06T02:34:23Z
--- tags: - generated_from_trainer model-index: - name: bert-semaphore-prediction-w4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-semaphore-prediction-w4 This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-sandbox1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-sandbox1 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 14.5875 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.10.3
albert-xxlarge-v2
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42,640
2022-06-06T04:54:18Z
--- tags: - conversational --- # Rick DialoGPT Medium Model 12 # Trained on: # Kaggle Rick and Morty TV transcript
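A minimal chat sketch with `transformers`, assuming a standard DialoGPT-style causal LM checkpoint; the repo id below is a placeholder, not the actual model name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; substitute the actual checkpoint name
model_name = "<namespace>/DialoGPT-medium-rick"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Single-turn exchange: DialoGPT expects turns separated by the EOS token
prompt = "Morty, where are we?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```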
bert-base-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,621,271
2022-06-06T05:13:42Z
Korean Dialect Translator: Standard > Gyeongsang - Used Data: AI Hub 한국어 방언 발화 (Korean dialect speech, Gyeongsang-do) - Used Model: SKT-KoBART - https://github.com/SKT-AI/KoBART - Reference Code - https://github.com/seujung/KoBART-translation
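A minimal inference sketch with `transformers`, assuming the checkpoint is a KoBART-style sequence-to-sequence model; the repo id and example sentence are placeholders:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id; substitute the actual checkpoint of this translator
model_name = "<namespace>/kobart-standard-to-gyeongsang"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "안녕하세요, 오늘 날씨가 정말 좋네요."  # standard Korean input
inputs = tokenizer([text], return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```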
bert-base-german-dbmdz-cased
[ "pytorch", "jax", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,814
2022-06-06T05:23:56Z
Korean Dialect Translator: Standard > Jeju - Used Data: AI Hub 한국어 방언 발화 (Korean dialect speech, Jeju-do) - Used Model: SKT-KoBART - https://github.com/SKT-AI/KoBART - Reference Code - https://github.com/seujung/KoBART-translation
bert-base-german-dbmdz-uncased
[ "pytorch", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
68,305
2022-06-06T05:27:20Z
Korean Dialect Translator: Jeju > Standard - Used Data: AI Hub 한국어 방언 발화 (Korean dialect speech, Jeju-do) - Used Model: SKT-KoBART - https://github.com/SKT-AI/KoBART - Reference Code - https://github.com/seujung/KoBART-translation
bert-base-multilingual-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4,749,504
2022-06-06T05:30:05Z
Korean Dialect Translator: Gyeongsang > Standard - Used Data: AI Hub 한국어 방언 발화 (Korean dialect speech, Gyeongsang-do) - Used Model: SKT-KoBART - https://github.com/SKT-AI/KoBART - Reference Code - https://github.com/seujung/KoBART-translation
bert-large-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
388,769
2022-06-06T06:07:19Z
--- library_name: stable-baselines3 tags: - MountainCar-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: -104.89 +/- 20.36 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: MountainCar-v0 type: MountainCar-v0 --- # **DQN** Agent playing **MountainCar-v0** This is a trained model of a **DQN** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the repo id and filename below are placeholders; substitute the actual values for this checkpoint):

```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename for this checkpoint
checkpoint = load_from_hub(repo_id="<namespace>/dqn-MountainCar-v0", filename="dqn-MountainCar-v0.zip")
model = DQN.load(checkpoint)
```
bert-large-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,058,496
2022-06-06T06:32:10Z
--- title: Poop emoji: 🥑 colorFrom: yellow colorTo: green sdk: static pinned: True license: apache-2.0 ---
ctrl
[ "pytorch", "tf", "ctrl", "en", "arxiv:1909.05858", "arxiv:1910.09700", "transformers", "license:bsd-3-clause", "has_space" ]
null
{ "architectures": null, "model_type": "ctrl", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17,007
2022-06-06T06:49:17Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: ksabeh/roberta-base-attribute-correction-qa-attribute-correction-qa results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ksabeh/roberta-base-attribute-correction-qa-attribute-correction-qa This model is a fine-tuned version of [ksabeh/roberta-base-attribute-correction-qa](https://huggingface.co/ksabeh/roberta-base-attribute-correction-qa) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1296 - Validation Loss: 0.1091 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 36783, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1296 | 0.1091 | 0 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.6.3 - Datasets 2.1.0 - Tokenizers 0.12.1
distilbert-base-german-cased
[ "pytorch", "safetensors", "distilbert", "fill-mask", "de", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
43,667
2022-06-06T07:12:12Z
--- tags: - generated_from_trainer model-index: - name: wangchanberta-base-att-spm-uncased-finetuned-cosme results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wangchanberta-base-att-spm-uncased-finetuned-cosme This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1386 | 1.0 | 391 | 1.9939 | | 2.1301 | 2.0 | 782 | 1.9974 | | 2.1296 | 3.0 | 1173 | 2.0013 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.11.0+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
distilbert-base-multilingual-cased
[ "pytorch", "tf", "onnx", "safetensors", "distilbert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,339,633
2022-06-06T07:28:24Z
--- license: mit --- Model trained on IBMArgRank30k for 2 epochs with a learning rate of 3e-5 (optimised via grid search) in a similar way as in Lauscher et al. 2020 (see below). The original model was Tensorflow-based. This model corresponds to a reimplementation with Transformers & PyTorch. ``` @inproceedings{lauscher-etal-2020-rhetoric, title = "Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing", author = "Lauscher, Anne and Ng, Lily and Napoles, Courtney and Tetreault, Joel", booktitle = "Proceedings of the 28th International Conference on Computational Linguistics", month = dec, year = "2020", address = "Barcelona, Spain (Online)", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2020.coling-main.402", doi = "10.18653/v1/2020.coling-main.402", pages = "4563--4574", abstract = "Though preceding work in computational argument quality (AQ) mostly focuses on assessing overall AQ, researchers agree that writers would benefit from feedback targeting individual dimensions of argumentation theory. However, a large-scale theory-based corpus and corresponding computational models are missing. We fill this gap by conducting an extensive analysis covering three diverse domains of online argumentative writing and presenting GAQCorpus: the first large-scale English multi-domain (community Q{\&}A forums, debate forums, review forums) corpus annotated with theory-based AQ scores. We then propose the first computational approaches to theory-based assessment, which can serve as strong baselines for future work. We demonstrate the feasibility of large-scale AQ annotation, show that exploiting relations between dimensions yields performance improvements, and explore the synergies between theory-based prediction and practical AQ assessment.", } ```
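A minimal scoring sketch with `transformers`, assuming the checkpoint exposes a single-logit regression head for argument quality; the repo id and example argument are placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id; assumes a single-output regression head
model_name = "<this-repo-id>"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

argument = "We should invest in renewable energy because it lowers long-term costs."
inputs = tokenizer(argument, return_tensors="pt", truncation=True)
with torch.no_grad():
    quality_score = model(**inputs).logits.squeeze().item()
print(quality_score)
```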
distilbert-base-uncased-distilled-squad
[ "pytorch", "tf", "tflite", "coreml", "safetensors", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
100,097
2022-06-06T07:29:23Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-paraphrase-finetuned-xsum-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-paraphrase-finetuned-xsum-v3 This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3377 - Rouge1: 99.9461 - Rouge2: 72.6619 - Rougel: 99.9461 - Rougelsum: 99.9461 - Gen Len: 9.0396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 139 | 0.3653 | 96.4972 | 70.8271 | 96.5252 | 96.5085 | 9.7158 | | No log | 2.0 | 278 | 0.6624 | 98.3228 | 72.2829 | 98.2598 | 98.2519 | 9.0612 | | No log | 3.0 | 417 | 0.2880 | 98.2415 | 72.36 | 98.249 | 98.2271 | 9.4496 | | 0.5019 | 4.0 | 556 | 0.4188 | 98.1123 | 70.8536 | 98.0746 | 98.0465 | 9.4065 | | 0.5019 | 5.0 | 695 | 0.3718 | 98.8882 | 72.6619 | 98.8997 | 98.8882 | 10.7842 | | 0.5019 | 6.0 | 834 | 0.4442 | 99.6076 | 72.6619 | 99.6076 | 99.598 | 9.0647 | | 0.5019 | 7.0 | 973 | 0.2681 | 99.6076 | 72.6619 | 99.598 | 99.598 | 9.1403 | | 0.2751 | 8.0 | 1112 | 0.3577 | 99.2479 | 72.6619 | 99.2536 | 99.2383 | 9.0612 | | 0.2751 | 9.0 | 1251 | 0.2481 | 98.8785 | 72.6394 | 98.8882 | 98.8882 | 9.7914 | | 0.2751 | 10.0 | 1390 | 0.2339 | 99.6076 | 72.6619 | 99.6076 | 99.6076 | 9.1942 | | 0.2051 | 11.0 | 1529 | 0.2472 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.2338 | | 0.2051 | 12.0 | 1668 | 0.3948 | 99.6076 | 72.6619 | 99.598 | 99.598 | 9.0468 | | 0.2051 | 13.0 | 1807 | 0.4756 | 99.6076 | 72.6619 | 99.6076 | 99.6076 | 9.0576 | | 0.2051 | 14.0 | 1946 | 0.3543 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | | 0.1544 | 15.0 | 2085 | 0.2828 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0576 | | 0.1544 | 16.0 | 2224 | 0.2456 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.1079 | | 0.1544 | 17.0 | 2363 | 0.2227 | 99.9461 | 72.6394 | 99.9461 | 99.9461 | 9.5072 | | 0.1285 | 18.0 | 2502 | 0.3490 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | | 0.1285 | 19.0 | 2641 | 0.3736 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | | 0.1285 | 20.0 | 2780 | 0.3377 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
distilbert-base-uncased-finetuned-sst-2-english
[ "pytorch", "tf", "rust", "safetensors", "distilbert", "text-classification", "en", "dataset:sst2", "dataset:glue", "arxiv:1910.01108", "doi:10.57967/hf/0181", "transformers", "license:apache-2.0", "model-index", "has_space" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,060,704
2022-06-06T07:37:17Z
--- language: ["ru"] pipeline_tag: feature-extraction tags: - feature-extraction - sentence-similarity license: apache-2.0 ---
distilbert-base-uncased
[ "pytorch", "tf", "jax", "rust", "safetensors", "distilbert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10,887,471
2022-06-06T07:37:38Z
--- language: id datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Indonesian by Ridho results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice id type: common_voice args: id metrics: - name: Test WER type: wer value: 21.07 --- Indonesian XLSR Wav2Vec2 model fine-tuned for automatic speech recognition on Common Voice Indonesian (test WER 21.07).
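A minimal transcription sketch with `transformers`, assuming a standard `Wav2Vec2ForCTC` checkpoint trained on 16 kHz audio; the repo id and audio path are placeholders:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder repo id and audio file; substitute the actual checkpoint and recording
model_name = "<namespace>/wav2vec2-large-xlsr-indonesian"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

speech, sample_rate = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze().numpy()
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```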
t5-3b
[ "pytorch", "tf", "t5", "text2text-generation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "transformers", "summarization", "translation", "license:apache-2.0", "autotrain_compatible", "has_space" ]
translation
{ "architectures": [ "T5WithLMHeadModel" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
103,474
2022-06-06T09:31:39Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: ko datasets: - lmqg/qg_koquad pipeline_tag: text2text-generation tags: - question generation widget: - text: "1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다." example_title: "Question Generation Example 1" - text: "백신이 없기때문에 예방책은 <hl> 살충제 <hl> 를 사용하면서 서식 장소(찻찬 받침, 배수로, 고인 물의 열린 저장소, 버려진 타이어 등)의 수를 줄임으로써 매개체를 통제할 수 있다." example_title: "Question Generation Example 2" - text: "<hl> 원테이크 촬영 <hl> 이기 때문에 한 사람이 실수를 하면 처음부터 다시 찍어야 하는 상황이 발생한다." example_title: "Question Generation Example 3" model-index: - name: lmqg/mt5-small-koquad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_koquad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 10.57 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 25.64 - name: METEOR (Question Generation) type: meteor_question_generation value: 27.52 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 82.89 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 82.49 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 87.52 - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 87.49 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 87.57 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 85.15 - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 85.09 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 85.23 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer value: 80.52 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer value: 83.8 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer value: 77.56 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer value: 82.95 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_gold_answer value: 87.02 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer value: 79.39 --- # Model Card of `lmqg/mt5-small-koquad-qg` 
This model is fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question generation task on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** ko - **Training data:** [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="ko", model="lmqg/mt5-small-koquad-qg") # model prediction questions = model.generate_q(list_context="1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.", list_answer="남부군") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-koquad-qg") output = pipe("1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-koquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 82.89 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_1 | 25.31 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_2 | 18.59 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_3 | 13.98 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_4 | 10.57 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | METEOR | 27.52 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | MoverScore | 82.49 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | ROUGE_L | 25.64 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/mt5-small-koquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_koquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 87.52 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedF1Score (MoverScore) | 85.15 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedPrecision (BERTScore) | 87.57 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedPrecision (MoverScore) | 85.23 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedRecall (BERTScore) | 87.49 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedRecall (MoverScore) | 85.09 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mt5-small-koquad-ae`](https://huggingface.co/lmqg/mt5-small-koquad-ae). [raw metric file](https://huggingface.co/lmqg/mt5-small-koquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_koquad.default.lmqg_mt5-small-koquad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 80.52 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedF1Score (MoverScore) | 82.95 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedPrecision (BERTScore) | 77.56 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedPrecision (MoverScore) | 79.39 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedRecall (BERTScore) | 83.8 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | QAAlignedRecall (MoverScore) | 87.02 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_koquad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: google/mt5-small - max_length: 512 - max_length_output: 32 - epoch: 7 - batch: 16 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-koquad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
2umm3r/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
2022-06-06T13:50:13Z
--- tags: - Token Classification - spacy - SpanCategorizer - grammar_checker - essay_checker language: - en license: cc-by-sa-3.0 --- # Essay Grammar Checker Essay Grammar Checker trained on [Russian Error-Annotated Learner English Corpus](https://realec.org). ## Training information The checker consists of 6 pipelines each trained on specific error types. Error Categories used for pipeline mapping: ``` "spelling":{"Spelling", "Capitalisation"}, "punctuation": {"Punctuation"}, "articles": {"Articles"}, "vocabulary": {"lex_item_choice", "lex_part_choice", 'Category_confusion','Formational_affixes'}, "grammar_major": {'Tense_choice','Prepositions','Agreement_errors', 'Redundant_comp'}, "grammar_minor": {'Word_order','Noun_number', 'Numerals','Verb_pattern', 'Determiners'} ``` [Detailed information](https://github.com/upunaprosk/grammar_checker) [Example usage in Colab](https://github.com/upunaprosk/grammar_checker/blob/master/grammar_checker_example_usage.ipynb)
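A minimal usage sketch with `spacy`, assuming the checker is distributed as installable spaCy pipelines as described in the linked repository; the pipeline name below is hypothetical:

```python
import spacy

# Hypothetical pipeline name; install or download one of the six checker pipelines first
nlp = spacy.load("en_grammar_checker_punctuation")

doc = nlp("He go to school every day , and he like it .")
# SpanCategorizer predictions are stored in doc.spans; the span key depends on the pipeline config
for key, spans in doc.spans.items():
    for span in spans:
        print(key, span.text, span.label_)
```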
AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1
[ "pytorch", "xlm-roberta", "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers" ]
sentence-similarity
{ "architectures": [ "XLMRobertaModel" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
73
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="vjeansel/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
AKulk/wav2vec2-base-timit-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-06T18:28:10Z
--- language: - en tags: - text-classification - zero-shot-classification license: mit metrics: - accuracy datasets: - multi_nli - anli - fever - lingnli - alisawuffles/WANLI pipeline_tag: zero-shot-classification #- text-classification #widget: #- text: "I first thought that I really liked the movie, but upon second thought it was actually disappointing. [SEP] The movie was not good." model-index: # info: https://github.com/huggingface/hub-docs/blame/main/modelcard.md - name: DeBERTa-v3-large-mnli-fever-anli-ling-wanli results: - task: type: text-classification # Required. Example: automatic-speech-recognition name: Natural Language Inference # Optional. Example: Speech Recognition dataset: type: multi_nli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: MultiNLI-matched # Required. A pretty name for the dataset. Example: Common Voice (French) split: validation_matched # Optional. Example: test metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 0.912 # Required. Example: 20.90 #name: # Optional. Example: Test WER verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported). - task: type: text-classification # Required. Example: automatic-speech-recognition name: Natural Language Inference # Optional. Example: Speech Recognition dataset: type: multi_nli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: MultiNLI-mismatched # Required. A pretty name for the dataset. Example: Common Voice (French) split: validation_mismatched # Optional. Example: test metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 0.908 # Required. Example: 20.90 #name: # Optional. Example: Test WER verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported). - task: type: text-classification # Required. Example: automatic-speech-recognition name: Natural Language Inference # Optional. Example: Speech Recognition dataset: type: anli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: ANLI-all # Required. A pretty name for the dataset. Example: Common Voice (French) split: test_r1+test_r2+test_r3 # Optional. Example: test metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 0.702 # Required. Example: 20.90 #name: # Optional. Example: Test WER verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported). - task: type: text-classification # Required. Example: automatic-speech-recognition name: Natural Language Inference # Optional. Example: Speech Recognition dataset: type: anli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: ANLI-r3 # Required. A pretty name for the dataset. Example: Common Voice (French) split: test_r3 # Optional. Example: test metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 0.64 # Required. Example: 20.90 #name: # Optional. Example: Test WER verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported). - task: type: text-classification # Required. Example: automatic-speech-recognition name: Natural Language Inference # Optional. Example: Speech Recognition dataset: type: alisawuffles/WANLI # Required. Example: common_voice. 
Use dataset id from https://hf.co/datasets name: WANLI # Required. A pretty name for the dataset. Example: Common Voice (French) split: test # Optional. Example: test metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 0.77 # Required. Example: 20.90 #name: # Optional. Example: Test WER verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported). - task: type: text-classification # Required. Example: automatic-speech-recognition name: Natural Language Inference # Optional. Example: Speech Recognition dataset: type: lingnli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: LingNLI # Required. A pretty name for the dataset. Example: Common Voice (French) split: test # Optional. Example: test metrics: - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics value: 0.87 # Required. Example: 20.90 #name: # Optional. Example: Test WER verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported). --- # DeBERTa-v3-large-mnli-fever-anli-ling-wanli ## Model description This model was fine-tuned on the [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), Adversarial-NLI ([ANLI](https://huggingface.co/datasets/anli)), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) datasets, which comprise 885 242 NLI hypothesis-premise pairs. This model is the best performing NLI model on the Hugging Face Hub as of 06.06.22 and can be used for zero-shot classification. It significantly outperforms all other large models on the [ANLI benchmark](https://github.com/facebookresearch/anli). The foundation model is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-v3-large). DeBERTa-v3 combines several recent innovations compared to classical Masked Language Models like BERT, RoBERTa etc.; see the [paper](https://arxiv.org/abs/2111.09543). ### How to use the model #### Simple zero-shot classification pipeline ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli") sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU" candidate_labels = ["politics", "economy", "entertainment", "environment"] output = classifier(sequence_to_classify, candidate_labels, multi_label=False) print(output) ``` #### NLI use-case ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device) premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing." hypothesis = "The movie was not good." 
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu" prediction = torch.softmax(output["logits"][0], -1).tolist() label_names = ["entailment", "neutral", "contradiction"] prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)} print(prediction) ``` ### Training data DeBERTa-v3-large-mnli-fever-anli-ling-wanli was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), Adversarial-NLI ([ANLI](https://huggingface.co/datasets/anli)), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) datasets, which comprise 885 242 NLI hypothesis-premise pairs. Note that [SNLI](https://huggingface.co/datasets/snli) was explicitly excluded due to quality issues with the dataset. More data does not necessarily make for better NLI models. ### Training procedure DeBERTa-v3-large-mnli-fever-anli-ling-wanli was trained using the Hugging Face trainer with the following hyperparameters. Note that longer training with more epochs hurt performance in my tests (overfitting). ``` training_args = TrainingArguments( num_train_epochs=4, # total number of training epochs learning_rate=5e-06, per_device_train_batch_size=16, # batch size per device during training gradient_accumulation_steps=2, # doubles the effective batch_size to 32, while decreasing memory requirements per_device_eval_batch_size=64, # batch size for evaluation warmup_ratio=0.06, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay fp16=True # mixed precision training ) ``` ### Eval results The model was evaluated using the test sets for MultiNLI, ANLI, LingNLI, WANLI and the dev set for Fever-NLI. The metric used is accuracy. The model achieves state-of-the-art performance on each dataset. Surprisingly, it outperforms the previous [state-of-the-art on ANLI](https://github.com/facebookresearch/anli) (ALBERT-XXL) by 8.3%. I assume that this is because ANLI was created to fool masked language models like RoBERTa (or ALBERT), while DeBERTa-v3 uses a better pre-training objective (RTD) and disentangled attention, and I fine-tuned it on higher quality NLI data. |Datasets|mnli_test_m|mnli_test_mm|anli_test|anli_test_r3|ling_test|wanli_test| | :---: | :---: | :---: | :---: | :---: | :---: | :---: | |Accuracy|0.912|0.908|0.702|0.64|0.87|0.77| |Speed (text/sec, A100 GPU)|696.0|697.0|488.0|425.0|828.0|980.0| ## Limitations and bias Please consult the original DeBERTa-v3 paper and literature on different NLI datasets for more information on the training data and potential biases. The model will reproduce statistical patterns in the training data. ## Citation If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k. ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ### Debugging and issues Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. 
resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
39
2022-06-06T20:48:41Z
--- library_name: stable-baselines3 tags: - RocketLander-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: TQC results: - metrics: - type: mean_reward value: -0.00 +/- 0.16 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: RocketLander-v0 type: RocketLander-v0 --- # **TQC** Agent playing **RocketLander-v0** This is a trained model of a **TQC** agent playing **RocketLander-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Gym env: https://github.com/sdsubhajitdas/Rocket_Lander_Gym ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo tqc --env RocketLander-v0 -orga araffin -f logs/ python enjoy.py --algo tqc --env RocketLander-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo tqc --env RocketLander-v0 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo tqc --env RocketLander-v0 -f logs/ -orga araffin ``` ## Hyperparameters ```python OrderedDict([('env_wrapper', [{'rl_zoo3.wrappers.FrameSkip': {'skip': 4}}, {'rl_zoo3.wrappers.HistoryWrapper': {'horizon': 2}}]), ('n_timesteps', 3000000.0), ('policy', 'MlpPolicy'), ('normalize', False)]) ```
AVSilva/bertimbau-large-fine-tuned-md
[ "pytorch", "bert", "fill-mask", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - summarization - ar - encoder-decoder - arabert - Abstractive Summarization - generated_from_trainer datasets: - xlsum model-index: - name: arabert2arabert-finetuned-ar-xlsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # arabert2arabert-finetuned-ar-xlsum This model is a fine-tuned version of [](https://huggingface.co/) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 5.1557 - Rouge-1: 25.3 - Rouge-2: 10.46 - Rouge-l: 22.12 - Gen Len: 20.0 - Bertscore: 71.98 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 8 - label_smoothing_factor: 0.1 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
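### Inference example (sketch) A minimal usage sketch for this checkpoint; since this is an arabert2arabert encoder-decoder, `EncoderDecoderModel` is assumed here, and the repository id below is a placeholder that should be replaced with the actual Hub id under which the model is published.
```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Placeholder repo id: substitute the real Hub path of this checkpoint.
model_id = "<user>/arabert2arabert-finetuned-ar-xlsum"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "..."  # an Arabic news article to summarize (placeholder)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs["input_ids"], max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```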
Abi9x/DiabloGPT-large-Axel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
``` !pip install simpletransformers from simpletransformers.t5 import T5Model model = T5Model("mt5", "eunsour/en-ko-transliterator", use_cuda=False) print(model.predict(["transformer"])) print(model.predict(["attention"])) ```
AdapterHub/bert-base-uncased-pf-cosmos_qa
[ "bert", "en", "dataset:cosmos_qa", "arxiv:2104.08247", "adapter-transformers", "adapterhub:comsense/cosmosqa" ]
null
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: - th tags: - automatic-speech-recognition license: apache-2.0 datasets: - common_voice metrics: - wer - cer --- # Thai Wav2Vec2 with CommonVoice V8 (deepcut tokenizer) + language model This model was trained on the Common Voice V8 dataset, extending the Common Voice V7 data that was used in [airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th). It was fine-tuned from [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). ## Model description - Technical report: [Thai Wav2Vec2.0 with CommonVoice V8](https://arxiv.org/abs/2208.04799) ## Datasets The training data adds the new Common Voice V8 data on top of Common Voice V7: all Common Voice V7 data is removed before Common Voice V8 is split, and the Common Voice V7 data is then added back to the resulting dataset. The [ekapolc/Thai_commonvoice_split](https://github.com/ekapolc/Thai_commonvoice_split) script is used to split the Common Voice dataset. ## Models This model fine-tunes the [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) model on the Thai Common Voice V8 dataset, using deepcut.tokenize for pre-tokenization. ## Evaluation **Test with CommonVoice V8 Testset** | Model | WER by newmm (%) | WER by deepcut (%) | CER | |-----------------------|------------------|--------------------|----------| | AIResearch.in.th and PyThaiNLP | 17.414503 | 11.923089 | 3.854153 | | wav2vec2 with deepcut | 16.354521 | 11.424476 | 3.684060 | | wav2vec2 with newmm | 16.698299 | 11.436941 | 3.737407 | | **wav2vec2 with deepcut + language model** | 12.630260 | 9.613886 | 3.292073 | | wav2vec2 with newmm + language model | 12.583706 | 9.598305 | 3.276610 | **Test with CommonVoice V7 Testset (same test set as CV V7)** | Model | WER by newmm (%) | WER by deepcut (%) | CER | |-----------------------|------------------|--------------------|----------| | AIResearch.in.th and PyThaiNLP | 13.936698 | 9.347462 | 2.804787 | | wav2vec2 with deepcut | 12.776381 | 8.773006 | 2.628882 | | wav2vec2 with newmm | 12.750596 | 8.672616 | 2.623341 | | **wav2vec2 with deepcut + language model** | 9.940050 | 7.423313 | 2.344940 | | wav2vec2 with newmm + language model | 9.559724 | 7.339654 | 2.277071 | This uses the same test set as [https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th). **Links:** - GitHub Dataset: [https://github.com/wannaphong/thai_commonvoice_dataset](https://github.com/wannaphong/thai_commonvoice_dataset) - Technical report: [Thai Wav2Vec2.0 with CommonVoice V8](https://arxiv.org/abs/2208.04799) ## BibTeX entry and citation info ``` @misc{phatthiyaphaibun2022thai, title={Thai Wav2Vec2.0 with CommonVoice V8}, author={Wannaphong Phatthiyaphaibun and Chompakorn Chaksangchaichot and Peerat Limkonchotiwat and Ekapol Chuangsuwanich and Sarana Nutanong}, year={2022}, eprint={2208.04799}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
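## Inference example (sketch) A minimal transcription sketch: the repository id is a placeholder for wherever this checkpoint is hosted, the audio path is a placeholder for a 16 kHz Thai speech file, and the language-model decoder additionally requires `pyctcdecode` and `kenlm` to be installed.
```python
from transformers import pipeline

# Placeholder repo id: point this at the actual checkpoint on the Hub.
asr = pipeline("automatic-speech-recognition", model="<user>/wav2vec2-large-xlsr-53-th-cv8-deepcut")

# Placeholder audio file (16 kHz mono Thai speech).
result = asr("example_thai_speech.wav")
print(result["text"])
```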
AdapterHub/roberta-base-pf-conll2003_pos
[ "roberta", "en", "dataset:conll2003", "arxiv:2104.08247", "adapter-transformers", "token-classification", "adapterhub:pos/conll2003" ]
token-classification
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ksabeh/bert-base-uncased-attribute-correction results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ksabeh/bert-base-uncased-attribute-correction This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0541 - Validation Loss: 0.0579 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36848, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1513 | 0.0671 | 0 | | 0.0541 | 0.0579 | 1 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
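### Optimizer reconstruction (sketch) The optimizer configuration listed above can be rebuilt in Keras roughly as follows; this is a sketch inferred from the stated hyperparameters rather than code taken from the training script.
```python
import tensorflow as tf

# Linear decay from 2e-05 to 0 over 36848 steps, as listed in the hyperparameters above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=36848,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```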
AdapterHub/roberta-base-pf-copa
[ "roberta", "en", "arxiv:2104.08247", "adapter-transformers", "adapterhub:comsense/copa" ]
null
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- library_name: stable-baselines3 tags: - FrozenLake-v1 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 0.80 +/- 0.40 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1 type: FrozenLake-v1 --- # **PPO** Agent playing **FrozenLake-v1** This is a trained model of a **PPO** agent playing **FrozenLake-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
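One way the TODO above could be filled in (a sketch: the `repo_id` and `filename` below are assumptions, not values taken from this card, so check the repository's file list before running).
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# repo_id and filename are placeholders; verify them against the actual repository.
checkpoint = load_from_hub(repo_id="<user>/ppo-FrozenLake-v1", filename="ppo-FrozenLake-v1.zip")
model = PPO.load(checkpoint)

env = gym.make("FrozenLake-v1")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```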
AdapterHub/roberta-base-pf-cosmos_qa
[ "roberta", "en", "dataset:cosmos_qa", "arxiv:2104.08247", "adapter-transformers", "adapterhub:comsense/cosmosqa" ]
null
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 --- GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models **Paper**: [GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models](https://arxiv.org/abs/2112.10741) **Abstract**: *Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity. We explore diffusion models for the problem of text-conditional image synthesis and compare two different guidance strategies: CLIP guidance and classifier-free guidance. We find that the latter is preferred by human evaluators for both photorealism and caption similarity, and often produces photorealistic samples. Samples from a 3.5 billion parameter text-conditional diffusion model using classifier-free guidance are favored by human evaluators to those from DALL-E, even when the latter uses expensive CLIP reranking. Additionally, we find that our models can be fine-tuned to perform image inpainting, enabling powerful text-driven image editing.* ## Usage ```python # !pip install diffusers import torch from diffusers import DiffusionPipeline import PIL.Image model_id = "fusing/glide-base" # load model and scheduler pipeline = DiffusionPipeline.from_pretrained(model_id) # run inference (text-conditioned denoising + upscaling) img = pipeline("a crayon drawing of a corgi") # process image to PIL img = img.squeeze(0) img = ((img + 1)*127.5).round().clamp(0, 255).to(torch.uint8).cpu().numpy() image_pil = PIL.Image.fromarray(img) # save image image_pil.save("test.png") ``` ## Samples 1. ![sample_1](https://huggingface.co/datasets/anton-l/images/resolve/main/glide1.png) 2. ![sample_2](https://huggingface.co/datasets/anton-l/images/resolve/main/glide2.png) 3. ![sample_3](https://huggingface.co/datasets/anton-l/images/resolve/main/glide3.png)
AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_30-epoch_30
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
# My Story model {'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1} Arthur goes to the beach. Arthur is in his beach day. He decides to go to the beach. He gets out on the board. He puts on his swimsuit. He goes to the beach. Arthur goes to the beach. Arthur is walking on the beach. He notices the water has gone very dirty. He gets out of his sand. He realizes that he should buy some new sand. He heads back to shore. Arthur goes to the beach. Arthur always wanted to go to the beach. He always wished that he could go on the beach. One day he asked his dad for the beach trip. His dad agreed that he went to the beach. Arthur was so happy that he was going to the beach. Arthur goes to the beach. Arthur went to the beach last week. His wife thought that he was going to the beach. She asked him to stop by and take a look at the ocean. Arthur said he was going to the ocean. His wife was not happy. Arthur goes to the beach. Arthur goes to the beach. He needs some sand for his feet. He needs to get sand for his feet. Arthur gets sand from the sand beach. Arthur goes to the beach with sand in his feet. {'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05} Arthur goes to the beach. Arthur wanted to go to the beach with his friends. His friends met up at the beach. They found out that it was too expensive for them to do. They then decided to buy a ticket to go by themselves. Arthur was not happy that he didn't get to go on the beach. Arthur goes to the beach. Arthur is on vacation. He decides to go to the beach. He arrives at the beach. He spends his day swimming in the ocean. Arthur has a great time at the beach. Arthur goes to the beach. Arthur was playing in the sand. He decided he wanted to go to the beach. The sand turned into muddy water. Arthur put his feet on the sand. He went back home. Arthur goes to the beach. Arthur always loved to go to the beach. The ocean is Arthur's favorite place to go. He likes to eat on the beach. He wants to see his parents again this year. He takes his mom to the beach for his birthday. Arthur goes to the beach. Arthur has always wanted to go to the beach. However, he has not ever been to the beach. Finally, one day he decides to go to the beach. He finally takes a nice day in the beach. Arthur is happy that he decided to go to the beach. {'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1} Arthur goes to the beach. Arthur was looking for a job. He decided to go to the beach. The ocean was a good place for him. He loved the sun and sand. Arthur went to the beach for work. Arthur goes to the beach. Arthur went to the beach for a day. He played in the water. He had a good time. He found out that it was really hot today. He put on sunscreen and went home. Arthur goes to the beach. Arthur wants to go to the beach. He wants to have fun with his friends. He arrives at the beach and goes swimming. He spends all day playing in the ocean. Arthur is happy that he spent time at the beach. Arthur goes to the beach. Arthur went to the beach with his family. His family wanted to go to the beach. Arthur got out of the water and realized he did not want to go. Arthur's family made fun of him because they want to go to the beach. Arthur was embarrassed by his family for being so indecisive. Arthur goes to the beach. Arthur has been watching his friends go to the beach. He was not happy with it though. His friend didn't tell him that he is going too. So Arthur had to leave. The two went to the beach together. 
{'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15} Arthur goes to the beach. Arthur was on a vacation in California. He decided he wanted to go to the beach. When Arthur arrived at the beach, it was very crowded. Arthur realized that there were not many people at the beach. Arthur went home and cried about it. Arthur goes to the beach. Arthur is at the beach with his friends. He has been playing in the sand for hours. His friends tell him he needs to go home and get a tan. They all leave the beach together. Arthur feels sad that he hasn't gotten a tan. Arthur goes to the beach. Arthur was walking down the beach with his girlfriend. They had decided to go for a swim in the ocean. While Arthur and his girlfriend were swimming, an alligator appeared on the shore. He tried to swim back but it was too hot for him. Arthur had to leave the beach without his girlfriend. Arthur goes to the beach. Arthur was looking for a place to go to the beach. He went to the local store and bought some sunscreen. He put on his swimsuit and went out into the ocean. After he got out of the water, he felt very sunburned. Arthur had never been to the beach before so he decided to stay at home. Arthur goes to the beach. Arthur was a young boy who wanted to go to the beach. He went to the beach and laid on his towel. Arthur had been so tired from playing with his friends. He decided to leave the beach to go home. Arthur was exhausted but happy he had made it to the beach. {'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2} Arthur goes to the beach. Arthur is going on a trip with his friends. He has been looking for days and finally finds it. They have packed up all their gear and are ready to go. Arthur gets in the car and drives off into the ocean. Arthur goes to the beach. Arthur is going to the beach with his friends. They decide that they want to go swimming in the ocean. When Arthur gets there, he realizes it's not a good day for him. He decides to stay at home and watch television instead. Arthur feels sad that he doesn't have fun at the beach. Arthur goes to the beach. Arthur is going to the beach with his friends. His friends want him to go swimming in the ocean. They all agree that he should go swimming. He agrees and they leave for the beach. Arthur spends the day at the beach. Arthur goes to the beach. Arthur is going on a trip with his friends. They are going to the beach. He has never been before so he doesn't know what to do. His friend tells him that he should go swimming. Arthur agrees and goes to the beach. Arthur goes to the beach. Arthur is going to the beach with his family. He has been waiting for this day all year long. His mother tells him that he needs to get out of the sand. They go to the beach and Arthur gets sand in his eyes. He doesn't want to leave the beach but he does anyway.
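The sampling dictionaries above correspond to `generate()` arguments; a rough sketch is shown below, where the checkpoint name is a placeholder because the card does not give the repository id.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<user>/story-model"  # placeholder for this story checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Arthur goes to the beach."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    top_k=40,
    temperature=0.8,
    repetition_penalty=1.1,
    max_new_tokens=100,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```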
AidenGO/KDXF_Bert4MaskedLM
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-large-uncased-compacter-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-compacter-squad This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_no_lm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: test_auto_protocol results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_auto_protocol This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.19.1 - Pytorch 1.6.0 - Datasets 2.2.1 - Tokenizers 0.12.1
Akashpb13/Swahili_xlsr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "sw", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-amazon_reviews_multi results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.9325 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2207 - Accuracy: 0.9325 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1937 | 1.0 | 1250 | 0.1811 | 0.9327 | | 0.1005 | 2.0 | 2500 | 0.2207 | 0.9325 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
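### Inference example (sketch) A minimal usage sketch; the repository id is a placeholder for the actual location of this fine-tuned checkpoint.
```python
from transformers import pipeline

# Placeholder repo id: replace with the actual fine-tuned checkpoint.
classifier = pipeline("text-classification", model="<user>/roberta-base-bne-finetuned-amazon_reviews_multi")

print(classifier("Este producto es excelente, lo recomiendo totalmente."))
```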
Akashpb13/xlsr_hungarian_new
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "hu", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-large-compacter-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-compacter-squad This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
Aleksandar/electra-srb-ner-setimes-lr
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-08T02:07:14Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-large-uncased-lora-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-lora-squad This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
Aleksandar/electra-srb-ner-setimes
[ "pytorch", "electra", "token-classification", "transformers", "generated_from_trainer", "autotrain_compatible" ]
token-classification
{ "architectures": [ "ElectraForTokenClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wa2vec2-5epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wa2vec2-5epochs This model is a fine-tuned version of [lighteternal/wav2vec2-large-xlsr-53-greek](https://huggingface.co/lighteternal/wav2vec2-large-xlsr-53-greek) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3049 - Accuracy: 0.9282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 454 | 0.7179 | 0.7599 | | 0.6962 | 2.0 | 908 | 0.3806 | 0.8911 | | 0.3776 | 3.0 | 1362 | 0.3299 | 0.9109 | | 0.2071 | 4.0 | 1816 | 0.3021 | 0.9257 | | 0.1262 | 5.0 | 2270 | 0.3049 | 0.9282 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.3.dev0 - Tokenizers 0.12.1
Aleksandar/electra-srb-ner
[ "pytorch", "safetensors", "electra", "token-classification", "dataset:wikiann", "transformers", "generated_from_trainer", "autotrain_compatible" ]
token-classification
{ "architectures": [ "ElectraForTokenClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
# This is a test model language: - "List of ISO 639-1 code for your language" - zh thumbnail: "url to a thumbnail used in social sharing" tags: - example - qbhy license: "any valid license identifier" datasets: - qbhy/dataset-example metrics: - metric1
Aleksandar/electra-srb-oscar
[ "pytorch", "electra", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "ElectraForMaskedLM" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: en thumbnail: http://www.huggingtweets.com/_pancagkes/1654655985301/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1525194520970899457/uqCAbAl__400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">carlala</div> <div style="text-align: center; font-size: 14px;">@_pancagkes</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from carlala. | Data | carlala | | --- | --- | | Tweets downloaded | 3096 | | Retweets | 2299 | | Short tweets | 253 | | Tweets kept | 544 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/w3ejvw24/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_pancagkes's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1e8xcsmm) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1e8xcsmm/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/_pancagkes') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Aleksandar1932/gpt2-country
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5155 - Wer: 0.3388 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5822 | 1.0 | 500 | 2.4127 | 1.0 | | 0.9838 | 2.01 | 1000 | 0.5401 | 0.5363 | | 0.4308 | 3.01 | 1500 | 0.4380 | 0.4592 | | 0.3086 | 4.02 | 2000 | 0.4409 | 0.4503 | | 0.2324 | 5.02 | 2500 | 0.4148 | 0.4041 | | 0.202 | 6.02 | 3000 | 0.4214 | 0.3882 | | 0.1595 | 7.03 | 3500 | 0.4489 | 0.3875 | | 0.1383 | 8.03 | 4000 | 0.4225 | 0.3858 | | 0.1246 | 9.04 | 4500 | 0.4512 | 0.3846 | | 0.104 | 10.04 | 5000 | 0.4676 | 0.3875 | | 0.0949 | 11.04 | 5500 | 0.4389 | 0.3683 | | 0.0899 | 12.05 | 6000 | 0.4964 | 0.3803 | | 0.0854 | 13.05 | 6500 | 0.5397 | 0.3798 | | 0.0728 | 14.06 | 7000 | 0.4823 | 0.3666 | | 0.065 | 15.06 | 7500 | 0.5187 | 0.3648 | | 0.0573 | 16.06 | 8000 | 0.5378 | 0.3715 | | 0.0546 | 17.07 | 8500 | 0.5239 | 0.3705 | | 0.0573 | 18.07 | 9000 | 0.5094 | 0.3554 | | 0.0478 | 19.08 | 9500 | 0.5334 | 0.3657 | | 0.0673 | 20.08 | 10000 | 0.5300 | 0.3528 | | 0.0434 | 21.08 | 10500 | 0.5314 | 0.3528 | | 0.0363 | 22.09 | 11000 | 0.5540 | 0.3512 | | 0.0326 | 23.09 | 11500 | 0.5514 | 0.3510 | | 0.0332 | 24.1 | 12000 | 0.5439 | 0.3492 | | 0.0275 | 25.1 | 12500 | 0.5273 | 0.3432 | | 0.0267 | 26.1 | 13000 | 0.5068 | 0.3430 | | 0.0243 | 27.11 | 13500 | 0.5131 | 0.3388 | | 0.0228 | 28.11 | 14000 | 0.5247 | 0.3406 | | 0.0227 | 29.12 | 14500 | 0.5155 | 0.3388 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
Aleksandar1932/gpt2-spanish-classics
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - zh inference: parameters: max_new_tokens: 250 repetition_penalty: 1.1 top_p: 0.9 do_sample: True license: apache-2.0 --- # Wenzhong2.0-GPT2-3.5B-chinese - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/) ## 简介 Brief Introduction 基于悟道数据集预训练,善于处理NLG任务,目前最大的,中文版的GPT2。 Pretraining on Wudao Corpus, focused on handling NLG tasks, the current largest, Chinese GPT2. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言生成 NLG | 闻仲 Wenzhong | GPT2 | 3.5B | 中文 Chinese | ## 模型信息 Model Information 为了可以获得一个强大的单向语言模型,我们采用GPT模型结构,并且应用于中文语料上。类似于Wenzhong-GPT2-3.5B,这个模型拥有30层解码器和35亿参数,这比原本的GPT2-XL还要大。不同的是,我们把这个模型在悟道(300G版本)语料上进行预训练。据我们所知,它是目前最大的中文的GPT模型。 To obtain a powerful unidirectional language model, we adopt the GPT model structure and apply it to the Chinese corpus. Similar to Wenzhong-GPT2-3.5B, this model has 30 decoder layers and 3.5 billion parameters, which is larger than the original GPT2-XL. The difference is that we pre-trained this model on the Wudao (300G version) corpus. To the best of our knowledge, it is the largest Chinese GPT model currently available. ## 使用 Usage ### 加载模型 Loading Models ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Wenzhong2.0-GPT2-3.5B-chinese') model = GPT2LMHeadModel.from_pretrained('IDEA-CCNL/Wenzhong2.0-GPT2-3.5B-chinese') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### 使用示例 Usage Examples ```python from transformers import pipeline, set_seed set_seed(55) generator = pipeline('text-generation', model='IDEA-CCNL/Wenzhong2.0-GPT2-3.5B-chinese') generator("北京位于", max_length=30, num_return_sequences=1) ``` ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970): If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970): ```text @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
Aleksandra/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model chuaziheng/hubert_1 is restricted and you are not in the authorized list. Visit https://huggingface.co/chuaziheng/hubert_1 to ask for access.
AlekseyKorshuk/comedy-scripts
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1350929535454359558/lWAfxbn4_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Vu Fewequ</div> <div style="text-align: center; font-size: 14px;">@vufewequ</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Vu Fewequ. | Data | Vu Fewequ | | --- | --- | | Tweets downloaded | 175 | | Retweets | 60 | | Short tweets | 5 | | Tweets kept | 110 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3d6nz5jt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vufewequ's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1psyqthq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1psyqthq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/vufewequ') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Alessandro/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-lora-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-lora-squad This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
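A minimal sketch of a LoRA setup for extractive QA on `roberta-base` using the `peft` library; the rank, scaling factor, and target modules below are illustrative assumptions rather than the configuration used for this checkpoint.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
base_model = AutoModelForQuestionAnswering.from_pretrained("roberta-base")

# Assumed LoRA hyperparameters; the card does not specify the actual ones.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in RoBERTa layers
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```

Freezing the base model and training only the adapter keeps the fine-tune lightweight while still targeting the SQuAD objective described above.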
AlexDemon/Alex
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_keras_callback model-index: - name: my-deberta results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # my-deberta This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
Amba/wav2vec2-large-xls-r-300m-tr-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - optimum datasets: - banking77 metrics: - accuracy model-index: - name: quantized-distilbert-banking77 results: - task: name: Text Classification type: text-classification dataset: name: banking77 type: banking77 metrics: - name: Accuracy type: accuracy value: 0.9244 --- # Quantized-distilbert-banking77 This model is a dynamically quantized version of [optimum/distilbert-base-uncased-finetuned-banking77](https://huggingface.co/optimum/distilbert-base-uncased-finetuned-banking77) on the `banking77` dataset. The model was created using the [dynamic-quantization](https://github.com/huggingface/workshops/tree/main/mlops-world) notebook from a workshop presented at MLOps World 2022. It achieves the following results on the evaluation set: **Accuracy** - Vanilla model: 92.5% - Quantized model: 92.44% > The quantized model achieves 99.93% accuracy of the FP32 model **Latency** Payload sequence length: 128 Instance type: AWS c6i.xlarge | latency | vanilla transformers | quantized optimum model | improvement | |---------|----------------------|-------------------------|-------------| | p95 | 63.24ms | 37.06ms | 1.71x | | avg | 62.87ms | 37.93ms | 1.66x | ## How to use ```python from optimum.onnxruntime import ORTModelForSequenceClassification from transformers import pipeline, AutoTokenizer model = ORTModelForSequenceClassification.from_pretrained("lewtun/quantized-distilbert-banking77") tokenizer = AutoTokenizer.from_pretrained("lewtun/quantized-distilbert-banking77") classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) classifier("What is the exchange rate like on this app?") ```
aisoftware/Loquela
[ "onnx" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-08T09:47:32Z
--- language: da tags: - speech - xls_r - xls_r_pretrained - danish license: apache-2.0 --- ## XLS-R-300m-danish Continued pretraining of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for 120,000 steps on 141,000 hours of speech from Danish radio (DR P1 and Radio24Syv from 2005 to 2021). The model was pretrained on 16kHz audio using fairseq and should be fine-tuned to perform speech recognition. A fine-tuned version of this model for ASR can be found [here](https://huggingface.co/chcaa/xls-r-300m-danish-nst-cv9). The model was trained by [Lasse Hansen](https://github.com/HLasse) ([CHCAA](https://chcaa.io)) and [Alvenir](https://alvenir.ai) on the [UCloud](https://cloud.sdu.dk) platform. Many thanks to the Royal Danish Library for providing access to the data.
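Since this checkpoint is pretrained but not fine-tuned, a typical first step is to extract hidden states from 16 kHz audio before adding a CTC head. The sketch below is illustrative only: the repository id is an assumption and the placeholder array stands in for real audio.

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

repo_id = "chcaa/xls-r-300m-danish"  # assumed repository id for this checkpoint
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(repo_id)
model = Wav2Vec2Model.from_pretrained(repo_id)

waveform = np.zeros(16_000, dtype=np.float32)  # one second of 16 kHz audio as a placeholder
inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape (1, frames, 1024)
```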
AmirBialer/amirbialer-Classifier
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - 6th - generated_from_trainer metrics: - f1 - accuracy model-index: - name: ff_analysis_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ff_analysis_3 This model is a fine-tuned version of [zdreiosis/ff_analysis_2](https://huggingface.co/zdreiosis/ff_analysis_2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0060 - F1: 1.0 - Roc Auc: 1.0 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | No log | 1.02 | 50 | 0.0138 | 1.0 | 1.0 | 1.0 | | No log | 2.04 | 100 | 0.0132 | 0.9966 | 0.9966 | 0.9885 | | No log | 3.06 | 150 | 0.0097 | 1.0 | 1.0 | 1.0 | | No log | 4.08 | 200 | 0.0095 | 0.9966 | 0.9966 | 0.9885 | | No log | 5.1 | 250 | 0.0096 | 1.0 | 1.0 | 1.0 | | No log | 6.12 | 300 | 0.0079 | 1.0 | 1.0 | 1.0 | | No log | 7.14 | 350 | 0.0070 | 1.0 | 1.0 | 1.0 | | No log | 8.16 | 400 | 0.0069 | 1.0 | 1.0 | 1.0 | | No log | 9.18 | 450 | 0.0065 | 1.0 | 1.0 | 1.0 | | 0.012 | 10.2 | 500 | 0.0060 | 1.0 | 1.0 | 1.0 | | 0.012 | 11.22 | 550 | 0.0060 | 0.9966 | 0.9966 | 0.9885 | | 0.012 | 12.24 | 600 | 0.0054 | 1.0 | 1.0 | 1.0 | | 0.012 | 13.27 | 650 | 0.0049 | 1.0 | 1.0 | 1.0 | | 0.012 | 14.29 | 700 | 0.0048 | 1.0 | 1.0 | 1.0 | | 0.012 | 15.31 | 750 | 0.0046 | 1.0 | 1.0 | 1.0 | | 0.012 | 16.33 | 800 | 0.0042 | 1.0 | 1.0 | 1.0 | | 0.012 | 17.35 | 850 | 0.0042 | 1.0 | 1.0 | 1.0 | | 0.012 | 18.37 | 900 | 0.0040 | 1.0 | 1.0 | 1.0 | | 0.012 | 19.39 | 950 | 0.0040 | 1.0 | 1.0 | 1.0 | | 0.0046 | 20.41 | 1000 | 0.0038 | 1.0 | 1.0 | 1.0 | | 0.0046 | 21.43 | 1050 | 0.0037 | 1.0 | 1.0 | 1.0 | | 0.0046 | 22.45 | 1100 | 0.0039 | 1.0 | 1.0 | 1.0 | | 0.0046 | 23.47 | 1150 | 0.0038 | 1.0 | 1.0 | 1.0 | | 0.0046 | 24.49 | 1200 | 0.0035 | 1.0 | 1.0 | 1.0 | | 0.0046 | 25.51 | 1250 | 0.0037 | 1.0 | 1.0 | 1.0 | | 0.0046 | 26.53 | 1300 | 0.0034 | 1.0 | 1.0 | 1.0 | | 0.0046 | 27.55 | 1350 | 0.0035 | 1.0 | 1.0 | 1.0 | | 0.0046 | 28.57 | 1400 | 0.0034 | 1.0 | 1.0 | 1.0 | | 0.0046 | 29.59 | 1450 | 0.0035 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.10.3
AmirHussein/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Domain adaptation is the process of fine-tuning pre-trained language models (PLMs) on domain-specific datasets to produce predictions that are better suited to the new datasets. Here, we re-train the BERT-base-uncased model on an unlabelled COVID-19 fake news dataset (Constraint@AAAI2021) using the masked language modeling (MLM) objective, where 15% of input text is masked, and the model is expected to predict the masked tokens.
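A minimal sketch of this MLM domain-adaptation setup with Transformers, assuming the unlabelled COVID-19 news texts are available as a list of strings; the hyperparameters are illustrative, not the exact recipe used.

```python
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder corpus: replace with the unlabelled Constraint@AAAI2021 texts.
texts = ["Example unlabelled COVID-19 news sentence.", "Another unlabelled headline about the pandemic."]
raw = Dataset.from_dict({"text": texts})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# 15% of input tokens are masked, matching the MLM objective described above.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-covid-mlm", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```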
AmirServi/MyModel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: silviacamplani/distilbert-uncase-finetuned-ai-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # silviacamplani/distilbert-uncase-finetuned-ai-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5704 - Validation Loss: 2.5380 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.2918 | 3.0479 | 0 | | 2.8526 | 2.6902 | 1 | | 2.5704 | 2.5380 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
Amirosein/distilbert_v1
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
Just a simple example Central definitions Look at other model cards
Amirosein/roberta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cnn_dailymail metrics: - rouge model-index: - name: t5-small-finetuned-cnndm-samsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: cnn_dailymail type: cnn_dailymail args: 3.0.0 metrics: - name: Rouge1 type: rouge value: 24.5996 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-cnndm-samsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.6422 - Rouge1: 24.5996 - Rouge2: 11.817 - Rougel: 20.3346 - Rougelsum: 23.2155 - Gen Len: 18.9999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 1.8078 | 1.0 | 71779 | 1.6422 | 24.5996 | 11.817 | 20.3346 | 23.2155 | 18.9999 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
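A usage sketch for this summarization checkpoint; the repository id below is a placeholder and should be replaced with the one the model is published under.

```python
from transformers import pipeline

# Placeholder repository id; substitute the actual repo for this checkpoint.
summarizer = pipeline("summarization", model="<username>/t5-small-finetuned-cnndm-samsum")

article = "The quick brown fox jumped over the lazy dog. " * 20  # stand-in for a news article
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```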
Amitabh/doc-classification
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - BreakoutNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 57.90 +/- 21.41 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: BreakoutNoFrameskip-v4 type: BreakoutNoFrameskip-v4 --- # **DQN** Agent playing **BreakoutNoFrameskip-v4** This is a trained model of a **DQN** agent playing **BreakoutNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga epsil -f logs/ python enjoy.py --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ -orga epsil ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', True), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Amro-Kamal/gpt
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# My Story model {'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1} Arthur goes to the beach. Arthur is in his beach chair. He is walking along the beach when he starts feeling a pain in his back. Arthur rushes to the doctor. The doctor says he needs a special cast. Arthur is so relieved he tears up. Arthur goes to the beach. Arthur takes his wife to the beach. Arthur's wife has sore muscles. He takes her to the local doctor. The doctor gives her medicine. Arthur and his wife enjoy the beach. Arthur goes to the beach. Arthur always wished he could go to the beach. He always wanted to go by himself. This time he went with his family. When they arrived the place was busy. Arthur was happy to be able to go to the beach. Arthur goes to the beach. Arthur has never been to the beach. When he finally gets there it is very hot. He decides to go to the beach. He enjoys his vacation at the beach. Arthur is happy that he has never been to the beach. Arthur goes to the beach. Arthur goes to the beach. He walks up the beach. He sits on the sand. Arthur lies on the beach. He falls asleep. {'top_p': 0.9, 'top_k': 50, 'temperature': 1, 'repetition_penalty': 1.05} Arthur goes to the beach. Arthur wants to get sandals. He buys a sandal and a pair of sandals. He gets in the water and does some stretching. He finds a few nice waves to surf. After surfing, he is able to buy some new sandals. Arthur goes to the beach. Arthur has been playing in the sand all day long. He decides to go swimming. He spends all afternoon in the water. Finally, Arthur heads home. Arthur finally has a fun beach day! Arthur goes to the beach. Arthur is very happy on vacation. However, Arthur is not excited to be on the beach. He has no idea how to swim. When he finally gets his board out, Arthur begins to get excited. Arthur swam his first time at the beach! Arthur goes to the beach. Arthur decided to go to the beach one day. He was very excited for it and got in the water. He was afraid of the waves and never went in. When he finally got out, he saw that he had gotten bit. Arthur still cried the rest of the day and went home. Arthur goes to the beach. Arthur was going to the beach with his girlfriend. Suddenly, he got a text from his girl that they were getting married. Arthur was so excited and was excited for the big day. When he saw the beach, he saw a beautiful woman. He went on the beach to thank her. {'top_p': 0.9, 'top_k': 40, 'temperature': 0.8, 'repetition_penalty': 1.1} Arthur goes to the beach. Arthur decides he wants to go swimming. He buys his favorite swimsuit and head for the water. As Arthur is about to enter the water, his friends show up. They tell Arthur that he has to pay for the day's swimwear. Arthur still feels guilty, but he doesn't want to be rude. Arthur goes to the beach. Arthur wants to go to the beach. He decides he has to get a job. He goes to work and gets hired. The next day he leaves his house. Arthur returns home and is happy. Arthur goes to the beach. Arthur is out with friends. He decides to go to the beach for a swim. At first he doesn't like the water. Then his friends make fun of him for being so skinny. Arthur finally decides to go swimming after all. Arthur goes to the beach. Arthur has never been to a beach before. He decides to go anyway. On his first day at the beach he gets seasick. He doesn't get any sun on his first day. Afterwards Arthur decides to not go to the beach for another year. Arthur goes to the beach. Arthur is going to the beach for a swim. 
He has never been to the beach before. As he is taking his first swim, a wave hits him in the head. His mother rushes over and tells him that he got hit by a wave. Arthur is glad he didn't go swimming. {'top_p': 0.9, 'top_k': 40, 'temperature': 0.6, 'repetition_penalty': 1.15} Arthur goes to the beach. It was Arthur's first time going to the beach. He went by himself and didn't know anyone. The water was very cold. After a few minutes, he decided to join a group of people. They had fun at the beach. Arthur goes to the beach. He decides to go for a swim at the beach. The water is very warm and Arthur feels very comfortable in his bathing suit. Suddenly, he notices something strange on the shore. It turns out that someone has been swimming there all day! Arthur is relieved when he finds out who it was. Arthur goes to the beach. Arthur is on a vacation with his family. They go to the beach and swim in the ocean. A shark jumps out at Arthur. He throws the water over it and it gets scared. Arthur decides not to go back to the beach for another year. Arthur goes to the beach. He wants to go for a swim. He doesn't want to get wet. He decides to use a towel. The towel gets soaked and he has to walk home. Arthur is glad he took his time. Arthur goes to the beach. Arthur is on vacation in Hawaii. He decides he wants to go surfing. Arthur takes a day off of work and heads out for the day. When Arthur gets there, he sees that it's very crowded! Arthur decides not to go back home for another two days. {'top_p': 0.9, 'top_k': 40, 'temperature': 0.4, 'repetition_penalty': 1.2} Arthur goes to the beach. Arthur is on a trip with his family. He decides to go for a swim at the ocean. The water is very cold and Arthur feels very hot. His parents take him back home. They tell him that he should have stayed in the house. Arthur goes to the beach. He decides he wants a vacation. He buys his ticket and flies out of town. When he arrives, he is surprised by how beautiful it was! The weather was perfect for him as well. He enjoyed himself immensely at the beach. Arthur goes to the beach. He decides he wants a nice day on the sand. The sun is shining and it's very hot. His friends come over to play with him. They all have fun playing in the water. Arthur feels much better after his day of fun. Arthur goes to the beach. Arthur is on vacation in Florida. He decides he wants to go to the beach. His friends tell him they can't make it for a few days. Arthur agrees and heads out with his friends. They all have fun at the beach. Arthur goes to the beach. He is going for a swim in the ocean. The water was very cold and Arthur didn't want to go. His friends convinced him to go anyway. When he got there, it was freezing! But he still went because he wanted to be with his friends.
Andrija/SRoBERTa-L
[ "pytorch", "roberta", "fill-mask", "hr", "sr", "multilingual", "dataset:oscar", "dataset:srwac", "dataset:leipzig", "transformers", "masked-lm", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
58
null
--- tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: rubert-base-cased_best_finetuned_emotion_experiment_augmented_anger_fear results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rubert-base-cased_best_finetuned_emotion_experiment_augmented_anger_fear This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4568 - Accuracy: 0.8779 - F1: 0.8777 - Precision: 0.8780 - Recall: 0.8779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=0.0001 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.2647 | 1.0 | 69 | 1.0075 | 0.6013 | 0.5671 | 0.6594 | 0.6013 | | 0.9091 | 2.0 | 138 | 0.7853 | 0.7171 | 0.7138 | 0.7169 | 0.7171 | | 0.7305 | 3.0 | 207 | 0.6264 | 0.7829 | 0.7811 | 0.7835 | 0.7829 | | 0.5446 | 4.0 | 276 | 0.4571 | 0.8466 | 0.8465 | 0.8470 | 0.8466 | | 0.4039 | 5.0 | 345 | 0.4035 | 0.8612 | 0.8606 | 0.8612 | 0.8612 | | 0.3144 | 6.0 | 414 | 0.3800 | 0.8653 | 0.8653 | 0.8665 | 0.8653 | | 0.2711 | 7.0 | 483 | 0.3731 | 0.8674 | 0.8673 | 0.8677 | 0.8674 | | 0.2289 | 8.0 | 552 | 0.4041 | 0.8737 | 0.8728 | 0.8746 | 0.8737 | | 0.1944 | 9.0 | 621 | 0.4002 | 0.8789 | 0.8785 | 0.8793 | 0.8789 | | 0.171 | 10.0 | 690 | 0.3939 | 0.8831 | 0.8827 | 0.8839 | 0.8831 | | 0.138 | 11.0 | 759 | 0.4106 | 0.8758 | 0.8754 | 0.8761 | 0.8758 | | 0.1141 | 12.0 | 828 | 0.4200 | 0.8810 | 0.8803 | 0.8804 | 0.8810 | | 0.1141 | 13.0 | 897 | 0.4426 | 0.8758 | 0.8756 | 0.8763 | 0.8758 | | 0.0961 | 14.0 | 966 | 0.4494 | 0.8758 | 0.8754 | 0.8761 | 0.8758 | | 0.0812 | 15.0 | 1035 | 0.4568 | 0.8779 | 0.8777 | 0.8780 | 0.8779 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
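An inference sketch for this emotion classifier; the repository id is a placeholder and the label names returned depend on how the classification head was configured.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="<username>/rubert-base-cased_best_finetuned_emotion_experiment_augmented_anger_fear",
)
print(classifier("Сегодня отличный день!"))  # e.g. [{'label': 'joy', 'score': 0.97}]
```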
AndyJ/clinicalBERT
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-multilang-cv-ru-night results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-multilang-cv-ru-night This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.6617 - Wer: 0.5097 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 8.725 | 1.58 | 500 | 3.2788 | 1.0 | | 3.1184 | 3.15 | 1000 | 2.4018 | 1.0015 | | 1.2393 | 4.73 | 1500 | 0.6213 | 0.7655 | | 0.6899 | 6.31 | 2000 | 0.5518 | 0.6811 | | 0.5532 | 7.89 | 2500 | 0.5102 | 0.6467 | | 0.4604 | 9.46 | 3000 | 0.4887 | 0.6213 | | 0.4095 | 11.04 | 3500 | 0.4874 | 0.6042 | | 0.3565 | 12.62 | 4000 | 0.4810 | 0.5893 | | 0.3238 | 14.2 | 4500 | 0.5028 | 0.5890 | | 0.3011 | 15.77 | 5000 | 0.5475 | 0.5808 | | 0.2827 | 17.35 | 5500 | 0.5289 | 0.5720 | | 0.2659 | 18.93 | 6000 | 0.5496 | 0.5733 | | 0.2445 | 20.5 | 6500 | 0.5354 | 0.5737 | | 0.2366 | 22.08 | 7000 | 0.5357 | 0.5686 | | 0.2181 | 23.66 | 7500 | 0.5491 | 0.5611 | | 0.2146 | 25.24 | 8000 | 0.5591 | 0.5597 | | 0.2006 | 26.81 | 8500 | 0.5625 | 0.5631 | | 0.1912 | 28.39 | 9000 | 0.5577 | 0.5647 | | 0.1821 | 29.97 | 9500 | 0.5684 | 0.5519 | | 0.1744 | 31.55 | 10000 | 0.5639 | 0.5551 | | 0.1691 | 33.12 | 10500 | 0.5596 | 0.5425 | | 0.1577 | 34.7 | 11000 | 0.5770 | 0.5551 | | 0.1522 | 36.28 | 11500 | 0.5634 | 0.5560 | | 0.1468 | 37.85 | 12000 | 0.5815 | 0.5453 | | 0.1508 | 39.43 | 12500 | 0.6053 | 0.5490 | | 0.1394 | 41.01 | 13000 | 0.6193 | 0.5504 | | 0.1291 | 42.59 | 13500 | 0.5930 | 0.5424 | | 0.1345 | 44.16 | 14000 | 0.6283 | 0.5442 | | 0.1296 | 45.74 | 14500 | 0.6063 | 0.5560 | | 0.1286 | 47.32 | 15000 | 0.6248 | 0.5378 | | 0.1231 | 48.9 | 15500 | 0.6106 | 0.5405 | | 0.1189 | 50.47 | 16000 | 0.6164 | 0.5342 | | 0.1127 | 52.05 | 16500 | 0.6269 | 0.5359 | | 0.112 | 53.63 | 17000 | 0.6170 | 0.5390 | | 0.1113 | 55.21 | 17500 | 0.6489 | 0.5385 | | 0.1023 | 56.78 | 18000 | 0.6826 | 0.5490 | | 0.1069 | 58.36 | 18500 | 0.6147 | 0.5296 | | 0.1008 | 59.94 | 19000 | 0.6414 | 0.5332 | | 0.1018 | 61.51 | 19500 | 0.6454 | 0.5288 | | 0.0989 | 63.09 | 20000 | 0.6603 | 0.5303 | | 0.0944 | 64.67 | 20500 | 0.6350 | 0.5288 | | 0.0905 | 66.25 | 21000 | 0.6386 | 0.5247 | | 0.0837 | 67.82 | 21500 | 0.6563 | 0.5298 | | 0.0868 | 69.4 | 22000 | 0.6375 | 0.5208 | | 0.0827 | 70.98 | 22500 | 0.6401 | 0.5271 | | 0.0797 | 72.56 | 23000 | 0.6723 | 0.5191 | | 0.0847 | 74.13 | 23500 | 0.6610 | 0.5213 | | 0.0818 | 75.71 | 24000 | 0.6774 | 0.5254 | | 0.0793 | 77.29 | 24500 | 0.6543 | 0.5250 | | 0.0758 | 78.86 | 25000 | 0.6607 | 0.5218 | | 0.0755 | 80.44 | 25500 | 0.6599 | 0.5160 | | 0.0722 | 
82.02 | 26000 | 0.6683 | 0.5196 | | 0.0714 | 83.6 | 26500 | 0.6941 | 0.5180 | | 0.0684 | 85.17 | 27000 | 0.6581 | 0.5167 | | 0.0686 | 86.75 | 27500 | 0.6651 | 0.5172 | | 0.0712 | 88.33 | 28000 | 0.6547 | 0.5208 | | 0.0697 | 89.91 | 28500 | 0.6555 | 0.5162 | | 0.0696 | 91.48 | 29000 | 0.6678 | 0.5107 | | 0.0686 | 93.06 | 29500 | 0.6630 | 0.5124 | | 0.0671 | 94.64 | 30000 | 0.6675 | 0.5143 | | 0.0668 | 96.21 | 30500 | 0.6602 | 0.5107 | | 0.0666 | 97.79 | 31000 | 0.6611 | 0.5097 | | 0.0664 | 99.37 | 31500 | 0.6617 | 0.5097 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
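A rough transcription sketch for this fine-tuned CTC model; the repository id is a placeholder and input audio must be 16 kHz mono.

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo_id = "<username>/wav2vec2-large-multilang-cv-ru-night"  # placeholder id
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)

speech = np.zeros(16_000, dtype=np.float32)  # stand-in for a real 16 kHz waveform
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```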
AnonymousSub/AR_bert-base-uncased
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 767.00 +/- 378.16 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga skyfox -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga skyfox ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000), ('optimize_memory_usage', True), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
AnonymousSub/AR_rule_based_roberta_bert_quadruplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - go_emotions metrics: - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-go_emotions_20220608_1 results: - task: name: Text Classification type: text-classification dataset: name: go_emotions type: go_emotions args: simplified metrics: - name: F1 type: f1 value: 0.5575026333429091 - name: Accuracy type: accuracy value: 0.43641725027644673 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-go_emotions_20220608_1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the go_emotions dataset. It achieves the following results on the evaluation set: - Loss: 0.0857 - F1: 0.5575 - Roc Auc: 0.7242 - Accuracy: 0.4364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | 0.173 | 1.0 | 679 | 0.1074 | 0.4245 | 0.6455 | 0.2976 | | 0.0989 | 2.0 | 1358 | 0.0903 | 0.5199 | 0.6974 | 0.3972 | | 0.0865 | 3.0 | 2037 | 0.0868 | 0.5504 | 0.7180 | 0.4263 | | 0.0806 | 4.0 | 2716 | 0.0860 | 0.5472 | 0.7160 | 0.4233 | | 0.0771 | 5.0 | 3395 | 0.0857 | 0.5575 | 0.7242 | 0.4364 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
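Because go_emotions is a multi-label task, inference should apply a per-label sigmoid and a threshold rather than a softmax/argmax. A sketch is shown below; the repository id and the 0.5 threshold are illustrative choices, not values fixed by the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "<username>/distilbert-base-uncased-finetuned-go_emotions_20220608_1"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Thanks so much, this really made my day!", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Keep every emotion whose probability clears the (assumed) 0.5 threshold.
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```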
AnonymousSub/AR_rule_based_roberta_hier_triplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t2981_22026.csv___topic_text_google_mt5_base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-xsum-data_prep_2021_12_26___t2981_22026.csv___topic_text_google_mt5_base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.7181 - Rouge2: 0.1008 - Rougel: 0.7173 - Rougelsum: 0.7187 - Gen Len: 6.2965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 139904 | nan | 0.7181 | 0.1008 | 0.7173 | 0.7187 | 6.2965 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
AnonymousSub/AR_specter
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned results: [] widget: - text: 'Se hospitalizó un hombre de 42 años, al que se le había diagnosticado recientemente un carcinoma renal sarcomatoide de células claras metastásico, con fiebre, manejo del dolor por metástasis óseas sintomáticas y para decisiones de tratamiento sistémico de primera línea. El paciente no tenía otros antecedentes. Inicialmente presentó fiebre de 39,0 °C el 12 de marzo de 2020, para la cual recibió ceftriaxona fuera de nuestro centro. El día 6, presentó tos leve y fiebre (38,3°C), lo que llevó a realizar una prueba de PCR en tiempo real para SARS-CoV-2; el resultado fue positivo. El paciente fue ingresado en la sala de COVID-19 de nuestro hospital y se monitorizó estrechamente. La TAC torácica mostró opacidades de vidrio esmerilado bilaterales parcheadas, asociadas al COVID-19 (figura 1). El D7 se le empezó a administrar terapia antivírica con lopinavir y ritonavir (400mg/100mg por vía oral), que se mantuvo durante 5 días, según las directrices locales. El día 8, una disnea súbita y una caída de la saturación obligaron a aumentar el oxígeno a 6 l/min, sin necesidad de ventilación mecánica. Se le administraron dos dosis de tocilizumab, con 8 mg/kg i.v. en cada dosis, separadas 8 horas, con buena tolerancia. Después mostró una mejora clínica, pasando a afebril rápidamente y con un consumo de oxígeno decreciente, que fue retirado por completo el día 12. Una TAC torácica del día 12 confirmó la mejora mostrando regresión parcial de los infiltrados pulmonares y de las opacidades de vidrio esmerilado. La proteína C-reactiva, un marcador indirecto de liberación de citocinas, disminuyó de 225 mg/L a 33 mg/L en 4 días (figura 1). Tras la administración de tocilizumab no se observaron cambios relevantes en las subpoblaciones linfocíticas circulantes y el porcentaje de CD4 + CD25 + linfocitos era alto, antes y después del tocilizumab. Finalmente, el paciente se recuperó totalmente de los síntomas de la COVID-19.' --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the LivingNER shared task 2022 dataset. 
It is available at: https://temu.bsc.es/livingner/category/data/ It achieves the following results on the evaluation set: - Loss: 0.0546 - Precision: 0.8574 - Recall: 0.7366 - F1: 0.7924 - Accuracy: 0.9893 ## Model description For a complete description of our system, please go to: https://ceur-ws.org/Vol-3202/livingner-paper13.pdf ## Training and evaluation data Dataset provided by LivingNER shared task, it is available at: https://temu.bsc.es/livingner/category/data/ ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0505 | 1.0 | 2568 | 0.0434 | 0.9399 | 0.6781 | 0.7878 | 0.9886 | | 0.0393 | 2.0 | 5136 | 0.0450 | 0.9384 | 0.6947 | 0.7984 | 0.9892 | | 0.0306 | 3.0 | 7704 | 0.0451 | 0.9497 | 0.6951 | 0.8027 | 0.9897 | | 0.0266 | 4.0 | 10272 | 0.0422 | 0.9646 | 0.6904 | 0.8048 | 0.9900 | | 0.0208 | 5.0 | 12840 | 0.0494 | 0.9576 | 0.6969 | 0.8067 | 0.9902 | | 0.0141 | 6.0 | 15408 | 0.0506 | 0.8407 | 0.7352 | 0.7844 | 0.9890 | | 0.0093 | 7.0 | 17976 | 0.0546 | 0.8574 | 0.7366 | 0.7924 | 0.9893 | ### How to cite this work: Tamayo, A., Burgos, D., & Gelbukh, A. (2022). ParTNER: Paragraph Tuning for Named Entity Recognition on Clinical Cases in Spanish using mBERT+ Rules. In CEUR Workshop Proceedings (Vol. 3202). CEUR-WS. @inproceedings{tamayo2022partner, title={ParTNER: Paragraph Tuning for Named Entity Recognition on Clinical Cases in Spanish using mBERT+ Rules}, author={Tamayo, Antonio and Burgos, Diego and Gelbukh, Alexander} } ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
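An inference sketch using the token-classification pipeline with simple aggregation; the repository id is a placeholder and the entity types follow the LivingNER annotation scheme.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<username>/NLP-CIC-WFU_Clinical_Cases_NER_Sents_tokenized_mBERT_cased_fine_tuned",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("El paciente presentó fiebre y fue tratado con ceftriaxona."))
```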
AnonymousSub/EManuals_BERT_copy
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: en thumbnail: http://www.huggingtweets.com/kentcdodds-richardbranson-sikiraamer/1654722520391/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1496777835062648833/3Ao6Xb2a_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1529905780542959616/Ibwrp7VJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1410740591483293697/tRbW1XoV_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Amer Sikira & Kent C. Dodds 💿 & Richard Branson</div> <div style="text-align: center; font-size: 14px;">@kentcdodds-richardbranson-sikiraamer</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Amer Sikira & Kent C. Dodds 💿 & Richard Branson. | Data | Amer Sikira | Kent C. Dodds 💿 | Richard Branson | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3249 | 3215 | | Retweets | 94 | 578 | 234 | | Short tweets | 214 | 507 | 96 | | Tweets kept | 2942 | 2164 | 2885 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jtwa65l2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kentcdodds-richardbranson-sikiraamer's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vt6qlgf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vt6qlgf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/kentcdodds-richardbranson-sikiraamer') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnonymousSub/EManuals_BERT_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: en license: bsd-3-clause library_name: pytorch-lightning tags: - pytorch-lightning - audio-to-audio datasets: vctk model_name: nu-wave-x2 --- # nu-wave-x2 ## Model description NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling - [GitHub Repo](https://github.com/mindslab-ai/nuwave) - [Paper](https://arxiv.org/pdf/2104.02321.pdf) This model was trained by contributor [Frederico S. Oliveira](https://huggingface.co/freds0), who graciously [provided the checkpoint](https://github.com/mindslab-ai/nuwave/issues/18) in the original authors' GitHub repo. It was trained using source code written by Junhyeok Lee and Seungu Han under the BSD 3-Clause License; all credit for the method goes to them. The model takes in 24 kHz audio and upsamples it to 48 kHz. ## Intended uses & limitations #### How to use You can try out this model here: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/bd78af284ef78a960e18a75cb13deab1/nu-wave-x2.ipynb) #### Limitations and bias The model targets 2x upsampling of speech from 24 kHz to 48 kHz; quality may degrade on other sampling rates, non-speech audio, or recording conditions far from the training data. ## Training data The model was trained on the VCTK speech corpus, as indicated by the `datasets` field in the metadata above. ## Training procedure See the [original repository](https://github.com/mindslab-ai/nuwave) for preprocessing, hardware, and hyperparameter details. ## Eval results You can check out the authors' results at [their project page](https://mindslab-ai.github.io/nuwave/), which contains many samples of upsampled audio from the authors' models. ### BibTeX entry and citation info ```bibtex @inproceedings{lee21nuwave, author={Junhyeok Lee and Seungu Han}, title={{NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling}}, year=2021, booktitle={Proc. Interspeech 2021}, pages={1634--1638}, doi={10.21437/Interspeech.2021-36} } ```
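As a quick-start sketch, the checkpoint can also be fetched programmatically with `huggingface_hub` and then loaded with the training code from the authors' repository; the repo id and checkpoint filename below are assumptions (check this repository's file listing for the actual names), and the inference loop itself follows the Colab linked above.

```python
# Minimal sketch: download the checkpoint from the Hugging Face Hub.
# NOTE: repo_id and filename are assumptions -- check this repo's "Files" tab for the real names.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="nateraw/nu-wave-x2",   # assumed repo id for this model card
    filename="nuwave_x2.ckpt",      # assumed checkpoint filename
)
print(ckpt_path)  # pass this checkpoint to the nuwave repo's Lightning module for inference
```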
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit --- VGG13 (w/o batch norm) model ported from [torchvision](https://pytorch.org/vision/stable/index.html) for use with [Metalhead.jl](https://github.com/FluxML/Metalhead.jl). The scripts for creating this file can be found at [this gist](https://gist.github.com/darsnack/bfb8594cf5fdc702bdacb66586f518ef). To use this model in Julia, [add the Metalhead.jl package to your environment](https://pkgdocs.julialang.org/v1/managing-packages/#Adding-packages). Then execute: ```julia using Metalhead model = VGG(13; pretrain = true) ```
AnonymousSub/bert_triplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-06-09T03:33:21Z
--- language: zh pipeline_tag: fill-mask tags: - bert license: apache-2.0 --- ## Alibaba PAI BERT Base Chinese This project provides Chinese pre-trained language models and various types of NLP tools. The models are pre-trained on the large-scale corpora hosted by the Alibaba PAI team. The project is developed based on the EasyNLP framework (https://github.com/alibaba/EasyNLP). ## Citation If you find this resource useful, please cite the following paper in your work: ```bibtex @article{easynlp, title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing}, publisher = {arXiv}, author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei}, url = {https://arxiv.org/abs/2205.00258}, year = {2022} } ```
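For a quick smoke test, a model like this can be paired with the standard `fill-mask` pipeline from `transformers`. A minimal sketch follows; the repository id is an assumption (a placeholder for this model's actual id on the Hub), so substitute the real id before running.

```python
# Minimal fill-mask sketch for a Chinese BERT.
# NOTE: the model id below is an assumption -- replace it with this repository's actual id.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="alibaba-pai/pai-bert-base-zh")

# "Paris is the capital of [MASK] country." -- the model should rank "法" (France) highly.
for prediction in fill_mask("巴黎是[MASK]国的首都。"):
    print(prediction["token_str"], prediction["score"])
```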