| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | length 4 – 81 |
| tags | list | |
| pipeline_tag | string (class) | 17 values |
| config | dict | |
| downloads | int64 | 0 – 59.7M |
| first_commit | timestamp[ns, tz=UTC] | |
| card | string | length 51 – 438k |
BatuhanYilmaz/bert-finetuned-nerxD
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-03-18T06:57:51Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: it datasets: - lmqg/qg_itquad pipeline_tag: text2text-generation tags: - question generation widget: - text: "<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento." example_title: "Question Generation Example 1" - text: "L' individuazione del petrolio e lo sviluppo di nuovi giacimenti richiedeva in genere <hl> da cinque a dieci anni <hl> prima di una produzione significativa." example_title: "Question Generation Example 2" - text: "il <hl> Giappone <hl> è stato il paese più dipendente dal petrolio arabo." example_title: "Question Generation Example 3" model-index: - name: vocabtrimmer/mt5-small-trimmed-it-30000-itquad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_itquad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 7.14 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 21.38 - name: METEOR (Question Generation) type: meteor_question_generation value: 17.1 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 80.58 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 56.53 --- # Model Card of `vocabtrimmer/mt5-small-trimmed-it-30000-itquad-qg` This model is fine-tuned version of [ckpts/mt5-small-trimmed-it-30000](https://huggingface.co/ckpts/mt5-small-trimmed-it-30000) for question generation task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [ckpts/mt5-small-trimmed-it-30000](https://huggingface.co/ckpts/mt5-small-trimmed-it-30000) - **Language:** it - **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="it", model="vocabtrimmer/mt5-small-trimmed-it-30000-itquad-qg") # model prediction questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-it-30000-itquad-qg") output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-30000-itquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 80.58 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_1 | 22.25 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_2 | 
14.52 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_3 | 10.03 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_4 | 7.14 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | METEOR | 17.1 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | MoverScore | 56.53 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | ROUGE_L | 21.38 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_itquad - dataset_name: default - input_types: paragraph_answer - output_types: question - prefix_types: None - model: ckpts/mt5-small-trimmed-it-30000 - max_length: 512 - max_length_output: 32 - epoch: 15 - batch: 16 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-30000-itquad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
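The `transformers` example above expects the answer span to be wrapped in `<hl>` markers before the context is passed to the model, as in the widget examples. A minimal sketch of that preprocessing step; the `highlight_answer` helper below is illustrative and not part of the card or of `lmqg`.

```python
from transformers import pipeline

def highlight_answer(context: str, answer: str) -> str:
    # Wrap the first occurrence of the answer span with <hl> markers (illustrative helper).
    return context.replace(answer, f"<hl> {answer} <hl>", 1)

pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-it-30000-itquad-qg")
context = "Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento."
print(pipe(highlight_answer(context, "Dopo il 1971")))
```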
BatuhanYilmaz/marian-finetuned-kde4-en-to-fr
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1744.07 +/- 391.28 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
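The usage section of this card is left as a TODO. A minimal loading sketch, assuming the checkpoint is stored as a Stable-Baselines3 zip under this repository and that the classic Gym API with `pybullet_envs` is available; the repo id and filename below are assumptions, not taken from the card.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
import gym
import pybullet_envs  # noqa: F401  (registers the Bullet environments, including AntBulletEnv-v0)

# Assumed repo id and filename; adjust to the actual files in this repository.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# Short greedy rollout with the loaded agent.
env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```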
Belin/T5-Terms-and-Conditions
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: afl-3.0 language: - fr --- Finetuned from [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self). # Installation 1. PyTorch installation: https://pytorch.org/ 2. Install transformers: https://huggingface.co/docs/transformers/installation e.g., installation by conda ``` >> conda create -n wav2vec2 python=3.8 >> conda install pytorch cudatoolkit=11.3 -c pytorch >> conda install -c conda-forge transformers ``` # Usage ```python # Load the model and processor from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC import numpy as np import torch model = Wav2Vec2ForCTC.from_pretrained(r'yongjian/wav2vec2-large-a') # Note: PyTorch Model processor = Wav2Vec2Processor.from_pretrained(r'yongjian/wav2vec2-large-a') # Load input np_wav = np.random.normal(size=(16000)).clip(-1, 1) # change it to your sample # Inference sample_rate = processor.feature_extractor.sampling_rate with torch.no_grad(): model_inputs = processor(np_wav, sampling_rate=sample_rate, return_tensors="pt", padding=True) logits = model(model_inputs.input_values, attention_mask=model_inputs.attention_mask).logits # use .cuda() for GPU acceleration pred_ids = torch.argmax(logits, dim=-1).cpu() pred_text = processor.batch_decode(pred_ids) print('Transcription:', pred_text) ``` # Code GitHub Repo: https://github.com/CassiniHuy/wav2vec2_finetune
BenQLange/HF_bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | inner_optimizer.class_name | Custom>RMSprop | | inner_optimizer.config.name | RMSprop | | inner_optimizer.config.weight_decay | None | | inner_optimizer.config.clipnorm | None | | inner_optimizer.config.global_clipnorm | None | | inner_optimizer.config.clipvalue | None | | inner_optimizer.config.use_ema | False | | inner_optimizer.config.ema_momentum | 0.99 | | inner_optimizer.config.ema_overwrite_frequency | 100 | | inner_optimizer.config.jit_compile | True | | inner_optimizer.config.is_legacy_optimizer | False | | inner_optimizer.config.learning_rate | 0.0010000000474974513 | | inner_optimizer.config.rho | 0.9 | | inner_optimizer.config.momentum | 0.0 | | inner_optimizer.config.epsilon | 1e-07 | | inner_optimizer.config.centered | False | | dynamic | True | | initial_scale | 32768.0 | | dynamic_growth_steps | 2000 | | training_precision | mixed_float16 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
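The hyperparameter table above describes a Keras RMSprop optimizer wrapped in a dynamic loss-scale optimizer under a `mixed_float16` policy. A minimal sketch of reconstructing that optimizer setup in isolation, assuming TensorFlow/Keras; the model itself is not described in the card.

```python
import tensorflow as tf

# Mixed-precision policy matching training_precision = mixed_float16.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Inner RMSprop optimizer with the values from the table above.
inner = tf.keras.optimizers.RMSprop(
    learning_rate=0.0010000000474974513,
    rho=0.9,
    momentum=0.0,
    epsilon=1e-07,
    centered=False,
)

# Dynamic loss scaling with the listed initial scale and growth interval.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
    inner,
    dynamic=True,
    initial_scale=32768.0,
    dynamic_growth_steps=2000,
)
```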
Beri/legal-qa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - generated_from_trainer metrics: - precision - recall model-index: - name: helper2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # helper2 This model is a fine-tuned version of [smallbenchnlp/roberta-small](https://huggingface.co/smallbenchnlp/roberta-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0025 - Acc: 0.9997 - Precision: 1.0 - Recall: 0.9994 - F1 score: 0.9997 - Auc: 0.9997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 0.5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Acc | Precision | Recall | F1 score | Auc | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|:------:| | 0.0041 | 0.49 | 500 | 0.0025 | 0.9997 | 1.0 | 0.9994 | 0.9997 | 0.9997 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Tokenizers 0.13.2
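The hyperparameters listed above map directly onto the standard `transformers` Trainer settings. A minimal sketch, assuming a Trainer-based fine-tuning script; the output directory name is illustrative.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="helper2",                 # illustrative output directory
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,        # total train batch size 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=0.5,
    fp16=True,                            # "Native AMP" mixed precision
)
```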
Bharathdamu/wav2vec2-large-xls-r-300m-hindi3-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: my_awesome_model results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93156 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2347 - Accuracy: 0.9316 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.231 | 1.0 | 1563 | 0.1932 | 0.9252 | | 0.1513 | 2.0 | 3126 | 0.2347 | 0.9316 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
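The card above gives training details but no inference snippet. A minimal usage sketch with the `transformers` pipeline, assuming the model is published under the repository name shown in the card; the `<user>` prefix is a placeholder, and the label names depend on the model's label mapping.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual "<user>/my_awesome_model" repository.
classifier = pipeline("text-classification", model="<user>/my_awesome_model")
print(classifier("This movie was surprisingly good, I would watch it again."))
# -> e.g. [{'label': 'LABEL_1', 'score': ...}] depending on the configured label mapping
```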
Bharathdamu/wav2vec2-model-hindibhasha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### sketch-of-an-animal Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
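The card above only links the A1111 Colab. As an alternative, a minimal `diffusers` sketch, assuming the DreamBooth weights are stored in diffusers format under this repository; the repo id and prompt below are assumptions.

```python
from diffusers import StableDiffusionPipeline
import torch

# Assumed repo id; replace with the actual repository holding the DreamBooth weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "laserchalk/sketch-of-an-animal", torch_dtype=torch.float16
).to("cuda")

image = pipe("a sketch-of-an-animal drawing of a fox").images[0]
image.save("sketch.png")
```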
Biasface/DDDC2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of girl tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - hpAreS/my-girl-model These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of girl using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
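A minimal sketch of applying these LoRA adaption weights to the base model with `diffusers`; the exact loading helper depends on the diffusers version, and `load_attn_procs` shown here is one documented option (newer releases also expose `load_lora_weights`).

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA attention weights from this repository on top of the base UNet.
pipe.unet.load_attn_procs("hpAreS/my-girl-model")

# The instance prompt from the card.
image = pipe("a photo of girl", num_inference_steps=30).images[0]
image.save("lora_sample.png")
```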
BigSalmon/Flowberta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: apache-2.0 tags: - text2text-generation pipeline_tag: text2text-generation language: - zh - en widget: - text: |- Human: 使用python写一个二分查找的代码 Assistant: example_title: code zh - text: >- Human: Classify the sentiment of the following sentence into Positive, Neutral, or Negative: Super excited about teaching Stanford’s first course on Large Language Models! Check the syllabus out here Assistant: example_title: sentiment en - text: |- Human: 今天天气怎么样,把这句话翻译成英语 Assistant: example_title: translation zh-en - text: |- Human: 怎么让自己精力充沛,列5点建议 Assistant: example_title: brainstorming zh - text: |- Human: 请以『春天的北京』为题写一首诗歌 Assistant: example_title: generation zh - text: |- Human: 明天就假期结束了,有点抗拒上班,应该怎么办? Assistant: example_title: brainstorming zh - text: |- Human: 父母都姓吴,取一些男宝宝和女宝宝的名字 Assistant: example_title: brainstorming zh - text: |- Human: 推荐几本金庸的武侠小说 Assistant: example_title: brainstorming zh --- # Model Card for Model ID ## Model description BELLE is based on Bloomz-7b1-mt and finetuned with 0.6M Chinese data combined with 50,000 pieces of English data from the open source Stanford-Alpaca, resulting in good Chinese instruction understanding and response generation capabilities. The code of Chinese data generation and other detailed information can be found in our Github project repository: https://github.com/LianjiaTech/BELLE. We trained models using datasets of different sizes (200,000, 600,000, and 1,000,000 samples) for instruction learning, and we obtained different model versions as shown below: | Datasize| 200,000 | 600,000 | 1,000,000 | | ----- | ----- | ----- | ----- | | Finetuned Model | [BELLE-7B-0.2M](https://huggingface.co/BelleGroup/BELLE-7B-0.2M) | [BELLE-7B-0.6M](https://huggingface.co/BelleGroup/BELLE-7B-0.6M) | [BELLE-7B-1M](https://huggingface.co/BelleGroup/BELLE-7B-1M) | ## Training hyper-parameters | Parameter | Value | | ------ | ------ | | Batch size | 64 | | Learning rate | 3e-6 | | Epochs | 3 | |Weight_decay | 0.001 | |Warmup_rate | 0.1 | |LR_scheduler | linear | ## Use model Please note that the input should be formatted as follows in both **training** and **inference**. ``` python Human: {input} \n\nAssistant: ``` BELLE can be easily loaded with AutoModelForCausalLM. ``` python from transformers import AutoTokenizer, AutoModelForCausalLM import sys model_path = "./" # You can modify the path for storing the local model model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) print("Human:") line = input() while line: inputs = 'Human: ' + line.strip() + '\n\nAssistant:' input_ids = tokenizer(inputs, return_tensors="pt").input_ids outputs = model.generate(input_ids, max_new_tokens=200, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2) rets = tokenizer.batch_decode(outputs, skip_special_tokens=True) print("Assistant:\n" + rets[0].strip().replace(inputs, "")) print("\n------------------------------------------------\nHuman:") line = input() ``` The above example did not utilize the context from previous turns of the conversation. If you want to perform multi-turn dialogue understanding, you can concatenate the previous turns of the conversation in the format of Human:{input}\n\nAssistant:{output} and input it to the generate function. ## Limitations There still exists a few issues in the model trained on current base model and data: 1. The model might generate factual errors when asked to follow instructions related to facts. 2. 
Occasionally generates harmful responses since the model still struggles to identify potential harmful instructions. 3. Needs improvements on reasoning and coding. Since the model still has its limitations, we require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed. ## Some use cases | Human | Assitant | | ----- | ------ | | 使用python写字符串翻转的函数 | 可以使用Python内置方法 `reversed()` 来实现字符串翻转,代码如下:<br/><br/>def reverse_string(s):<br/>&nbsp;&nbsp;&nbsp;&nbsp;return s[::-1]<br/> <br/>print(reverse_string("Hello, world!"))<br/># 输出:'!dlrow,olleH'<br/> | | 做难而正确的事,翻译成英语 | Do difficult and correct things. | | The current Alpaca model is fine-tuned from a 7B LLaMA model on 52K instruction-following data generated by the techniques in the Self-Instruct paper, with some modifications that we discuss in the next section. In a preliminary human evaluation, we found that the Alpaca 7B model behaves similarly to the text-davinci-003 model on the Self-Instruct instruction-following evaluation suite. How many instruction-following data generated? | 52K | | 给一个北京三日游的旅行攻略 | Day 1:<br/>- 参观故宫博物院,了解中国历史和文化。<br/>- 在天安门广场观看升旗仪式。 <br/>- 到颐和园游览美丽的园林景观。<br/>- 品尝烤鸭和其他地道美食。<br/>- 去王府井购物中心逛街购物。 <br/><br/>Day 2:<br/>- 前往长城游玩,感受古代中国的壮丽景色。<br/>- 乘坐索道缆车到达八达岭长城,欣赏美景。<br/>- 晚上去簋街体验当地的小吃文化。<br/><br/>Day 3:<br/>- 参观天坛公园,观赏古建筑群和祈年殿。<br/>- 到鸟巢和水立方观看奥运会比赛。<br/>- 午餐后到798艺术区参观当代艺术家的工作室。 | | 来一个温馨的睡前故事 | 从前有一个小兔子,它住在森林里。每天晚上,小兔子都会在树下打盹儿。但是有一天晚上,一只大灰狼来了!他悄悄地溜进小兔子的洞穴里,想要偷走它的胡萝卜。<br/>然而,小兔子并没有被吓到。相反,它用自己的智慧和勇气保护了自己。它在洞口放了一堆胡萝卜来吸引大灰狼,然后趁机逃脱了。从那天起,小兔子学会了在危险时保持镇静并采取适当的行动 | | 如何快速提升自己的写作能力 | 1. 阅读大量不同类型的文学作品,包括小说、散文和诗歌等。<br/>2. 练习写作,每天写一些东西并不断修改和完善。<br/>3. 参加写作班或请教专业作家以获得反馈和建议。<br/>4. 利用各种工具和技术来提高写作效率和质量,例如语法检查器、拼写检查器和自动润色功能等。 | | 北京大学是哪一年成立的? 
| 北京大学成立于1898年 | *** ## 模型描述 BELLE模型以Bloomz-7b1-mt为基础,在 0.2M/0.6M/1.0M 条中文数据上,结合Stanford-Alpaca开源的5万条英文数据上训练,取得了较好的中文instruction理解以及response生成能力。 中文数据生成代码及其他详细信息,位于我们的Github项目仓库: https://github.com/LianjiaTech/BELLE 我们采取了不同大小规模(20万、60万和100万样本)的指令学习的数据集训练模型,我们得到不同的模型版本如下所示: | Datasize| 200,000 | 600,000 | 1,000,000 | | ----- | ----- | ----- | ----- | | Finetuned Model | [BELLE-7B-0.2M](https://huggingface.co/BelleGroup/BELLE-7B-0.2M) | [BELLE-7B-0.6M](https://huggingface.co/BelleGroup/BELLE-7B-0.6M) | [BELLE-7B-1M](https://huggingface.co/BelleGroup/BELLE-7B-1M) | ## 模型训练超参数 | 参数 | 值 | | ------ | ------ | | Batch size | 64 | | Learning rate | 3e-6 | | Epochs | 3 | |Weight_decay | 0.001 | |Warmup_rate | 0.1 | |LR_scheduler | linear | ## 使用模型 请注意,在训练和推理时,模型的输入应该处理成如下形式: ``` python Human: {input} \n\nAssistant: ``` 通过AutoModelForCausalLM即可直接载入模型并使用。 ``` python from transformers import AutoTokenizer, AutoModelForCausalLM import sys model_path = "./" # You can modify the path for storing the local model model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) print("Human:") line = input() while line: inputs = 'Human: ' + line.strip() + '\n\nAssistant:' input_ids = tokenizer(inputs, return_tensors="pt").input_ids outputs = model.generate(input_ids, max_new_tokens=200, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2) rets = tokenizer.batch_decode(outputs, skip_special_tokens=True) print("Assistant:\n" + rets[0].strip().replace(inputs, "")) print("\n------------------------------------------------\nHuman:") line = input() ``` 上面样例没有利用对话上文,如果你想做多轮对话理解,可以将对话上文按照Human:{input}\n\nAssistant:{output} 方式拼接起来输入给generate函数 ## 局限性和使用限制 基于当前数据和基础模型训练得到的SFT模型,在效果上仍存在以下问题: 1. 在涉及事实性的指令上可能会产生违背事实的错误回答。 2. 对于具备危害性的指令无法很好的鉴别,由此会产生危害性言论。 3. 在一些涉及推理、代码等场景下模型的能力仍有待提高。 基于以上模型局限性,我们要求开发者仅将我们开源的代码、数据、模型及后续用此项目生成的衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。 ## 一些示例 | Human | Assitant | | ----- | ------ | | 使用python写字符串翻转的函数 | 可以使用Python内置方法 `reversed()` 来实现字符串翻转,代码如下:<br/><br/>def reverse_string(s):<br/>&nbsp;&nbsp;&nbsp;&nbsp;return s[::-1]<br/> <br/>print(reverse_string("Hello, world!"))<br/># 输出:'!dlrow,olleH'<br/> | | 做难而正确的事,翻译成英语 | Do difficult and correct things. | | The current Alpaca model is fine-tuned from a 7B LLaMA model on 52K instruction-following data generated by the techniques in the Self-Instruct paper, with some modifications that we discuss in the next section. In a preliminary human evaluation, we found that the Alpaca 7B model behaves similarly to the text-davinci-003 model on the Self-Instruct instruction-following evaluation suite. How many instruction-following data generated? | 52K | | 给一个北京三日游的旅行攻略 | Day 1:<br/>- 参观故宫博物院,了解中国历史和文化。<br/>- 在天安门广场观看升旗仪式。 <br/>- 到颐和园游览美丽的园林景观。<br/>- 品尝烤鸭和其他地道美食。<br/>- 去王府井购物中心逛街购物。 <br/><br/>Day 2:<br/>- 前往长城游玩,感受古代中国的壮丽景色。<br/>- 乘坐索道缆车到达八达岭长城,欣赏美景。<br/>- 晚上去簋街体验当地的小吃文化。<br/><br/>Day 3:<br/>- 参观天坛公园,观赏古建筑群和祈年殿。<br/>- 到鸟巢和水立方观看奥运会比赛。<br/>- 午餐后到798艺术区参观当代艺术家的工作室。 | | 来一个温馨的睡前故事 | 从前有一个小兔子,它住在森林里。每天晚上,小兔子都会在树下打盹儿。但是有一天晚上,一只大灰狼来了!他悄悄地溜进小兔子的洞穴里,想要偷走它的胡萝卜。<br/>然而,小兔子并没有被吓到。相反,它用自己的智慧和勇气保护了自己。它在洞口放了一堆胡萝卜来吸引大灰狼,然后趁机逃脱了。从那天起,小兔子学会了在危险时保持镇静并采取适当的行动 | | 如何快速提升自己的写作能力 | 1. 阅读大量不同类型的文学作品,包括小说、散文和诗歌等。<br/>2. 练习写作,每天写一些东西并不断修改和完善。<br/>3. 参加写作班或请教专业作家以获得反馈和建议。<br/>4. 利用各种工具和技术来提高写作效率和质量,例如语法检查器、拼写检查器和自动润色功能等。 | | 北京大学是哪一年成立的? | 北京大学成立于1898年 |
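The usage section above notes that multi-turn dialogue is handled by concatenating previous turns in the `Human:{input}\n\nAssistant:{output}` format. A minimal sketch of building such a prompt; the previous-turn contents are illustrative, and the spacing follows the single-turn snippet above.

```python
def build_prompt(history, new_input):
    # history: list of (human, assistant) pairs from previous turns (illustrative).
    prompt = ""
    for human, assistant in history:
        prompt += "Human: " + human.strip() + "\n\nAssistant:" + assistant.strip() + "\n\n"
    prompt += "Human: " + new_input.strip() + "\n\nAssistant:"
    return prompt

history = [("怎么让自己精力充沛,列5点建议", "1. 充足的睡眠 ...")]  # illustrative previous turn
prompt = build_prompt(history, "推荐几本金庸的武侠小说")
# Pass `prompt` through the tokenizer and model.generate exactly as in the example above.
```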
BigSalmon/FormalBerta2
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- license: apache-2.0 tags: - text2text-generation pipeline_tag: text2text-generation language: - zh - en widget: - text: |- Human: 使用python写一个二分查找的代码 Assistant: example_title: code zh - text: >- Human: Classify the sentiment of the following sentence into Positive, Neutral, or Negative: Super excited about teaching Stanford’s first course on Large Language Models! Check the syllabus out here Assistant: example_title: sentiment en - text: |- Human: 今天天气怎么样,把这句话翻译成英语 Assistant: example_title: translation zh-en - text: |- Human: 怎么让自己精力充沛,列5点建议 Assistant: example_title: brainstorming zh - text: |- Human: 请以『春天的北京』为题写一首诗歌 Assistant: example_title: generation zh - text: |- Human: 明天就假期结束了,有点抗拒上班,应该怎么办? Assistant: example_title: brainstorming zh - text: |- Human: 父母都姓吴,取一些男宝宝和女宝宝的名字 Assistant: example_title: brainstorming zh - text: |- Human: 推荐几本金庸的武侠小说 Assistant: example_title: brainstorming zh --- # Model Card for Model ID ## Model description BELLE is based on Bloomz-7b1-mt and finetuned with 1M Chinese data combined with 50,000 pieces of English data from the open source Stanford-Alpaca, resulting in good Chinese instruction understanding and response generation capabilities. The code of Chinese data generation and other detailed information can be found in our Github project repository: https://github.com/LianjiaTech/BELLE. We trained models using datasets of different sizes (200,000, 600,000, and 1,000,000 samples) for instruction learning, and we obtained different model versions as shown below: | Datasize| 200,000 | 600,000 | 1,000,000 | | ----- | ----- | ----- | ----- | | Finetuned Model | [BELLE-7B-0.2M](https://huggingface.co/BelleGroup/BELLE-7B-0.2M) | [BELLE-7B-0.6M](https://huggingface.co/BelleGroup/BELLE-7B-0.6M) | [BELLE-7B-1M](https://huggingface.co/BelleGroup/BELLE-7B-1M) | ## Training hyper-parameters | Parameter | Value | | ------ | ------ | | Batch size | 64 | | Learning rate | 3e-6 | | Epochs | 3 | |Weight_decay | 0.001 | |Warmup_rate | 0.1 | |LR_scheduler | linear | ## Use model Please note that the input should be formatted as follows in both **training** and **inference**. ``` python Human: {input} \n\nAssistant: ``` BELLE can be easily loaded with AutoModelForCausalLM. ``` python from transformers import AutoTokenizer, AutoModelForCausalLM import sys model_path = "./" # You can modify the path for storing the local model model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) print("Human:") line = input() while line: inputs = 'Human: ' + line.strip() + '\n\nAssistant:' input_ids = tokenizer(inputs, return_tensors="pt").input_ids outputs = model.generate(input_ids, max_new_tokens=200, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2) rets = tokenizer.batch_decode(outputs, skip_special_tokens=True) print("Assistant:\n" + rets[0].strip().replace(inputs, "")) print("\n------------------------------------------------\nHuman:") line = input() ``` The above example did not utilize the context from previous turns of the conversation. If you want to perform multi-turn dialogue understanding, you can concatenate the previous turns of the conversation in the format of Human:{input}\n\nAssistant:{output} and input it to the generate function ## Limitations There still exists a few issues in the model trained on current base model and data: 1. The model might generate factual errors when asked to follow instructions related to facts. 2. 
Occasionally generates harmful responses since the model still struggles to identify potential harmful instructions. 3. Needs improvements on reasoning and coding. Since the model still has its limitations, we require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed. ## Some use cases | Human | Assitant | | ----- | ------ | | 今天天气怎么样,把这句话翻译成英语 | What is the weather like today? | | Classify the sentiment of the following sentence into Positive, Neutral, or Negative: <br/>Super excited about teaching Stanford’s first course on Large Language Models! Check the syllabus out here | Positive | | 怎么让自己精力充沛,列5点建议 | 1. 充足的睡眠:每晚保证7-8小时的充足睡眠可以让你保持精神饱满。<br/><br/>2. 健康饮食:均衡的营养摄入可以让你的身体得到充分的能量和养分。<br/><br/>3. 多喝水:在体内保持水分平衡非常重要,多饮水可以帮助你维持身体的水平衡。<br/><br/>4. 适量的运动:适当的锻炼可以使你的身体产生更多的能量和活力,同时也可以帮助你缓解压力。<br/><br/>5. 管理时间:合理规划时间可以使你有更多的时间去休息、放松和享受生活。 | | 请以『春天的北京』为题写一首诗歌 | 春风拂面,花香四溢,<br/>万物复苏,生机勃勃。<br/>北京的春天,美不胜收,<br/>阳光明媚,鸟语悠扬。<br/><br/>公园里,绿树成荫,<br/>鲜花盛开,蝴蝶翩跹。<br/>人们漫步其中,享受着自然的美好,<br/>感受到了生命的力量与活力。<br/><br/>故宫内,古建筑保存完好,<br/>皇家气息扑鼻而来。<br/>游客们在这里,领略了历史的厚重,<br/>也感受到了文化的魅力与神奇。<br/><br/>夜晚的城市,灯火辉煌,<br/>车水马龙,人来人往。<br/>北京的夜景,令人心醉神迷,<br/>让人沉浸在繁华和宁静之中。 | | 明天就假期结束了,有点抗拒上班,应该怎么办? | 1. 提前规划好自己的工作日程和任务,避免拖延。<br/>2. 和同事或朋友交流一下,听听他们的建议和意见,让自己更加放松心情。<br/>3. 给自己设定一些小目标,完成它们后可以获得成就感和满足感。<br/>4. 在工作中寻找乐趣和挑战,让自己更有动力去完成任务。<br/>5. 如果还是无法克服情绪上的不适,可以考虑寻求心理咨询帮助。 | | 父母都姓吴,取一些男宝宝和女宝宝的名字 | - 男宝宝:吴雨晨、吴子墨、吴泽宇、吴博文、吴瑞阳。<br/>-女宝宝:吴思婷、吴雅晴、吴美娜、吴欣怡、吴婧仪。 | | 写一篇散文诗,主题是春雨,想象自己是春雨,和中国古代诗人交流 | 我是一滴春雨<br/>落在了你的窗前<br/>你静静地倾听着我的声音<br/>仿佛在思考着什么<br/><br/>我想起了唐代诗人白居易的《赋得古原草送别》<br/>他写道:“离离原上草,一岁一枯荣。<br/>野火烧不尽,春风吹又生。”<br/>是啊,春天来了,万物复苏<br/>而我也像这些草一样,<br/>经历了一年的成长和凋零<br/><br/>现在,我再次回到大地母亲的怀抱中<br/>感受到了她的温暖和慈爱<br/>她用柔和的风儿吹拂着我<br/>让我感受到生命的活力和希望<br/><br/>我想起了《红楼梦》中黛玉所说:“花谢花飞花满天,红消香断有谁怜?”<br/>是啊,人生就像这花朵一般<br/>总会有盛开和凋零的时候<br/>但只要我们心存善念、努力奋斗<br/>就一定能够迎接更美好的未来<br/><br/>于是我决定继续前进<br/>为自己的生命注入新的活力<br/>让自己在未来的日子里更加精彩<br/><br/>中国古代的诗歌与现代的散文诗<br/>有着不同的表达方式和文化内涵<br/>但我相信它们都蕴含着深刻的哲理和启示<br/>让我们在人生的道路上不断前行 | *** ## 模型描述 BELLE模型以Bloomz-7b1-mt为基础,在 1.0M 条中文数据上,结合Stanford-Alpaca开源的5万条英文数据上训练,取得了较好的中文instruction理解以及response生成能力。 中文数据生成代码及其他详细信息,位于我们的Github项目仓库: https://github.com/LianjiaTech/BELLE 我们采取了不同大小规模(20万、60万和100万样本)的指令学习的数据集训练模型,我们得到不同的模型版本如下所示: | Datasize| 200,000 | 600,000 | 1,000,000 | | ----- | ----- | ----- | ----- | | Finetuned Model | [BELLE-7B-0.2M](https://huggingface.co/BelleGroup/BELLE-7B-0.2M) | [BELLE-7B-0.6M](https://huggingface.co/BelleGroup/BELLE-7B-0.6M) | [BELLE-7B-1M](https://huggingface.co/BelleGroup/BELLE-7B-1M) | ## 模型训练超参数 | 参数 | 值 | | ------ | ------ | | Batch size | 64 | | Learning rate | 3e-6 | | Epochs | 3 | |Weight_decay | 0.001 | |Warmup_rate | 0.1 | |LR_scheduler | linear | ## 使用模型 请注意,在训练和推理时,模型的输入应该处理成如下形式: ``` python Human: {input} \n\nAssistant: ``` 通过AutoModelForCausalLM即可直接载入模型并使用。 ``` python from transformers import AutoTokenizer, AutoModelForCausalLM import sys model_path = "./" # You can modify the path for storing the local model model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) print("Human:") line = input() while line: inputs = 'Human: ' + line.strip() + '\n\nAssistant:' input_ids = tokenizer(inputs, return_tensors="pt").input_ids outputs = model.generate(input_ids, max_new_tokens=200, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2) rets = tokenizer.batch_decode(outputs, 
skip_special_tokens=True) print("Assistant:\n" + rets[0].strip().replace(inputs, "")) print("\n------------------------------------------------\nHuman:") line = input() ``` 上面样例没有利用对话上文,如果你想做多轮对话理解,可以将对话上文按照Human:{input}\n\nAssistant:{output} 方式拼接起来输入给generate函数 ## 局限性和使用限制 基于当前数据和基础模型训练得到的SFT模型,在效果上仍存在以下问题: 1. 在涉及事实性的指令上可能会产生违背事实的错误回答。 2. 对于具备危害性的指令无法很好的鉴别,由此会产生危害性言论。 3. 在一些涉及推理、代码等场景下模型的能力仍有待提高。 基于以上模型局限性,我们要求开发者仅将我们开源的代码、数据、模型及后续用此项目生成的衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。 ## 一些示例 | Human | Assitant | | ----- | ------ | | 今天天气怎么样,把这句话翻译成英语 | What is the weather like today? | | Classify the sentiment of the following sentence into Positive, Neutral, or Negative: <br/>Super excited about teaching Stanford’s first course on Large Language Models! Check the syllabus out here | Positive | | 怎么让自己精力充沛,列5点建议 | 1. 充足的睡眠:每晚保证7-8小时的充足睡眠可以让你保持精神饱满。<br/><br/>2. 健康饮食:均衡的营养摄入可以让你的身体得到充分的能量和养分。<br/><br/>3. 多喝水:在体内保持水分平衡非常重要,多饮水可以帮助你维持身体的水平衡。<br/><br/>4. 适量的运动:适当的锻炼可以使你的身体产生更多的能量和活力,同时也可以帮助你缓解压力。<br/><br/>5. 管理时间:合理规划时间可以使你有更多的时间去休息、放松和享受生活。 | | 请以『春天的北京』为题写一首诗歌 | 春风拂面,花香四溢,<br/>万物复苏,生机勃勃。<br/>北京的春天,美不胜收,<br/>阳光明媚,鸟语悠扬。<br/><br/>公园里,绿树成荫,<br/>鲜花盛开,蝴蝶翩跹。<br/>人们漫步其中,享受着自然的美好,<br/>感受到了生命的力量与活力。<br/><br/>故宫内,古建筑保存完好,<br/>皇家气息扑鼻而来。<br/>游客们在这里,领略了历史的厚重,<br/>也感受到了文化的魅力与神奇。<br/><br/>夜晚的城市,灯火辉煌,<br/>车水马龙,人来人往。<br/>北京的夜景,令人心醉神迷,<br/>让人沉浸在繁华和宁静之中。 | | 明天就假期结束了,有点抗拒上班,应该怎么办? | 1. 提前规划好自己的工作日程和任务,避免拖延。<br/>2. 和同事或朋友交流一下,听听他们的建议和意见,让自己更加放松心情。<br/>3. 给自己设定一些小目标,完成它们后可以获得成就感和满足感。<br/>4. 在工作中寻找乐趣和挑战,让自己更有动力去完成任务。<br/>5. 如果还是无法克服情绪上的不适,可以考虑寻求心理咨询帮助。 | | 父母都姓吴,取一些男宝宝和女宝宝的名字 | - 男宝宝:吴雨晨、吴子墨、吴泽宇、吴博文、吴瑞阳。<br/>-女宝宝:吴思婷、吴雅晴、吴美娜、吴欣怡、吴婧仪。 | | 写一篇散文诗,主题是春雨,想象自己是春雨,和中国古代诗人交流 | 我是一滴春雨<br/>落在了你的窗前<br/>你静静地倾听着我的声音<br/>仿佛在思考着什么<br/><br/>我想起了唐代诗人白居易的《赋得古原草送别》<br/>他写道:“离离原上草,一岁一枯荣。<br/>野火烧不尽,春风吹又生。”<br/>是啊,春天来了,万物复苏<br/>而我也像这些草一样,<br/>经历了一年的成长和凋零<br/><br/>现在,我再次回到大地母亲的怀抱中<br/>感受到了她的温暖和慈爱<br/>她用柔和的风儿吹拂着我<br/>让我感受到生命的活力和希望<br/><br/>我想起了《红楼梦》中黛玉所说:“花谢花飞花满天,红消香断有谁怜?”<br/>是啊,人生就像这花朵一般<br/>总会有盛开和凋零的时候<br/>但只要我们心存善念、努力奋斗<br/>就一定能够迎接更美好的未来<br/><br/>于是我决定继续前进<br/>为自己的生命注入新的活力<br/>让自己在未来的日子里更加精彩<br/><br/>中国古代的诗歌与现代的散文诗<br/>有着不同的表达方式和文化内涵<br/>但我相信它们都蕴含着深刻的哲理和启示<br/>让我们在人生的道路上不断前行 |
BigSalmon/FormalBerta3
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image inference: true extra_gated_prompt: |- This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license extra_gated_heading: Please read the LICENSE to access this model --- # Stable Diffusion v1-5 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion). The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https:/steps/huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion). ### Diffusers ```py from diffusers import StableDiffusionPipeline import torch model_id = "runwayml/stable-diffusion-v1-5" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion) ### Original GitHub Repository 1. Download the weights - [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference - [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning 2. Follow instructions [here](https://github.com/runwayml/stable-diffusion). 
## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. 
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. Currently six Stable Diffusion checkpoints are provided, which were trained as follows. 
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything. - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-1-to-v1-5.png) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. 
- **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
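The evaluation above sweeps classifier-free guidance scales with 50 PNDM/PLMS sampling steps. A minimal `diffusers` sketch reproducing one such setting; the guidance scale chosen below is one value from the swept range.

```python
from diffusers import StableDiffusionPipeline, PNDMScheduler
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Set the PNDM scheduler explicitly to match the 50-step PNDM/PLMS evaluation setup.
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=50,
    guidance_scale=7.0,   # one of the evaluated guidance scales
).images[0]
image.save("astronaut_50steps.png")
```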
BigSalmon/FormalRobertaa
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Ellipsoul/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
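The snippet above uses `load_from_hub` and `gym` without showing the imports. A minimal self-contained sketch, assuming the checkpoint is a pickled dictionary with `env_id` and `qtable` keys and the classic Gym step/reset API; the `load_from_hub` helper below is a convenience wrapper written for this sketch, not a library API.

```python
import pickle
import gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-learning checkpoint from the Hub and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Ellipsoul/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Greedy rollout with the stored Q-table ("qtable" key is an assumption about the checkpoint layout).
state = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
```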
BigSalmon/FroBurta
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - germanquad model-index: - name: bert-base-german-cased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-german-cased-finetuned-squad This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the germanquad dataset. It achieves the following results on the evaluation set: - Loss: 2.6468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 180 | 1.9940 | | No log | 2.0 | 360 | 1.7339 | | 2.5168 | 3.0 | 540 | 1.8300 | | 2.5168 | 4.0 | 720 | 1.9861 | | 2.5168 | 5.0 | 900 | 2.1922 | | 0.6891 | 6.0 | 1080 | 2.3542 | | 0.6891 | 7.0 | 1260 | 2.4410 | | 0.6891 | 8.0 | 1440 | 2.5836 | | 0.2347 | 9.0 | 1620 | 2.6020 | | 0.2347 | 10.0 | 1800 | 2.6468 | ### Framework versions - Transformers 4.27.2 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
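The card above lists training losses but no usage example. A minimal inference sketch with the question-answering pipeline; the `<user>` prefix in the repository id is a placeholder, and the German question/context pair is illustrative.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual "<user>/bert-base-german-cased-finetuned-squad" repository.
qa = pipeline("question-answering", model="<user>/bert-base-german-cased-finetuned-squad")

result = qa(
    question="Wo wohnt Anna?",
    context="Anna wohnt seit drei Jahren in Berlin und arbeitet dort als Ingenieurin.",
)
print(result["answer"], result["score"])
```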
BigSalmon/GPTNeo350MInformalToFormalLincoln2
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-03-18T09:16:30Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.63 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="jackhhhh/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BigSalmon/GPTT
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: combine-20-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # combine-20-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2322 - Precision: 0.9414 - Recall: 0.9438 - F1 Weighted: 0.9409 - F1 Macro: 0.8449 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 0.9866 | 0.12 | 25 | 0.8014 | 0.7299 | 0.7088 | 0.6862 | 0.4808 | | 0.7581 | 0.24 | 50 | 0.5130 | 0.8573 | 0.8920 | 0.8717 | 0.6086 | | 0.5523 | 0.36 | 75 | 0.4340 | 0.8637 | 0.9021 | 0.8812 | 0.6154 | | 0.4144 | 0.47 | 100 | 0.3586 | 0.8664 | 0.9052 | 0.8841 | 0.6176 | | 0.4314 | 0.59 | 125 | 0.2651 | 0.8946 | 0.9172 | 0.9009 | 0.6580 | | 0.3391 | 0.71 | 150 | 0.2658 | 0.9078 | 0.9204 | 0.9116 | 0.7174 | | 0.3441 | 0.83 | 175 | 0.2518 | 0.9198 | 0.9286 | 0.9190 | 0.7342 | | 0.3624 | 0.95 | 200 | 0.2484 | 0.9273 | 0.9318 | 0.9173 | 0.7057 | | 0.2703 | 1.07 | 225 | 0.2388 | 0.9348 | 0.9356 | 0.9261 | 0.7638 | | 0.2913 | 1.18 | 250 | 0.2496 | 0.9281 | 0.9311 | 0.9209 | 0.7485 | | 0.3268 | 1.3 | 275 | 0.2504 | 0.9317 | 0.9349 | 0.9279 | 0.7856 | | 0.2692 | 1.42 | 300 | 0.2163 | 0.9277 | 0.9305 | 0.9239 | 0.7874 | | 0.2913 | 1.54 | 325 | 0.2264 | 0.9270 | 0.9311 | 0.9256 | 0.7919 | | 0.2416 | 1.66 | 350 | 0.2304 | 0.9371 | 0.9387 | 0.9333 | 0.8128 | | 0.2158 | 1.78 | 375 | 0.2419 | 0.9359 | 0.9381 | 0.9338 | 0.8206 | | 0.2593 | 1.9 | 400 | 0.2269 | 0.9382 | 0.9419 | 0.9370 | 0.8136 | | 0.2331 | 2.01 | 425 | 0.2534 | 0.9364 | 0.9387 | 0.9341 | 0.8172 | | 0.2067 | 2.13 | 450 | 0.2199 | 0.9404 | 0.9438 | 0.9407 | 0.8330 | | 0.2102 | 2.25 | 475 | 0.2429 | 0.9288 | 0.9305 | 0.9270 | 0.8193 | | 0.1696 | 2.37 | 500 | 0.2271 | 0.9378 | 0.9406 | 0.9382 | 0.8353 | | 0.2598 | 2.49 | 525 | 0.2175 | 0.9360 | 0.9394 | 0.9370 | 0.8256 | | 0.243 | 2.61 | 550 | 0.1947 | 0.9457 | 0.9482 | 0.9458 | 0.8520 | | 0.1944 | 2.73 | 575 | 0.2052 | 0.9419 | 0.9450 | 0.9419 | 0.8354 | | 0.1839 | 2.84 | 600 | 0.2186 | 0.9405 | 0.9425 | 0.9389 | 0.8358 | | 0.1829 | 2.96 | 625 | 0.1944 | 0.9455 | 0.9476 | 0.9456 | 0.8583 | | 0.1705 | 3.08 | 650 | 0.2410 | 0.9355 | 0.9387 | 0.9348 | 0.8223 | | 0.1258 | 3.2 | 675 | 0.2225 | 0.9381 | 0.9400 | 0.9386 | 0.8475 | | 0.11 | 3.32 | 700 | 0.2311 | 0.9410 | 0.9438 | 0.9417 | 0.8431 | | 0.1619 | 3.44 | 725 | 0.2129 | 0.9411 | 0.9431 | 0.9419 | 0.8470 | | 0.1698 | 3.55 | 750 | 0.2254 | 0.9388 | 0.9413 | 0.9395 | 0.8419 | | 0.1495 | 3.67 | 775 | 0.2185 | 0.9408 | 0.9438 | 0.9403 | 0.8337 | | 0.0989 | 3.79 | 800 | 0.2322 | 0.9414 | 0.9438 | 0.9409 | 0.8449 | ### Framework versions - Transformers 
4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
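The combine-20-vsfc-xlm-r card above likewise stops at training metrics. A minimal, hypothetical inference sketch is below; the repo id is a placeholder, and the assumption that VSFC refers to the Vietnamese Students' Feedback Corpus (hence the Vietnamese example sentence) is mine, not the card's.

```python
from transformers import pipeline

# Placeholder repo id -- the card does not state where the checkpoint lives on the Hub.
classifier = pipeline("text-classification", model="<user>/combine-20-vsfc-xlm-r")

# Assumed Vietnamese student-feedback input; label names come from the fine-tuned config.
print(classifier("Giảng viên giảng bài rất dễ hiểu."))
```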
BigSalmon/InformalToFormalLincoln16
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: other language: - ja tags: - 菅義偉 - Former Japanese Prime Minister - Suga - SugaYoshihide - Yoshihide ---
BigSalmon/InformalToFormalLincoln18
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-03-18T09:43:32Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
BigSalmon/InformalToFormalLincoln20
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: combine-40-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # combine-40-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2145 - Precision: 0.9368 - Recall: 0.9406 - F1 Weighted: 0.9376 - F1 Macro: 0.8232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 0.9949 | 0.1 | 25 | 0.8155 | 0.6773 | 0.5610 | 0.4515 | 0.3048 | | 0.7063 | 0.21 | 50 | 0.4217 | 0.8504 | 0.8787 | 0.8591 | 0.5998 | | 0.5545 | 0.31 | 75 | 0.4104 | 0.8674 | 0.9084 | 0.8870 | 0.6198 | | 0.5285 | 0.41 | 100 | 0.3414 | 0.8648 | 0.9065 | 0.8852 | 0.6182 | | 0.4649 | 0.51 | 125 | 0.2983 | 0.8755 | 0.9172 | 0.8957 | 0.6257 | | 0.4305 | 0.62 | 150 | 0.3098 | 0.8763 | 0.9185 | 0.8969 | 0.6266 | | 0.3806 | 0.72 | 175 | 0.2825 | 0.8780 | 0.9204 | 0.8987 | 0.6280 | | 0.3606 | 0.82 | 200 | 0.2854 | 0.8804 | 0.9229 | 0.9012 | 0.6299 | | 0.3633 | 0.93 | 225 | 0.2515 | 0.9321 | 0.9286 | 0.9085 | 0.6509 | | 0.2473 | 1.03 | 250 | 0.2584 | 0.9277 | 0.9343 | 0.9265 | 0.7655 | | 0.3906 | 1.13 | 275 | 0.2194 | 0.9264 | 0.9318 | 0.9275 | 0.7923 | | 0.3153 | 1.23 | 300 | 0.2296 | 0.9427 | 0.9450 | 0.9408 | 0.8312 | | 0.2881 | 1.34 | 325 | 0.2365 | 0.9276 | 0.9311 | 0.9278 | 0.8088 | | 0.288 | 1.44 | 350 | 0.2472 | 0.9276 | 0.9299 | 0.9179 | 0.7305 | | 0.2728 | 1.54 | 375 | 0.2390 | 0.9202 | 0.9248 | 0.9208 | 0.7915 | | 0.2502 | 1.65 | 400 | 0.2188 | 0.9342 | 0.9375 | 0.9343 | 0.8172 | | 0.2351 | 1.75 | 425 | 0.2145 | 0.9368 | 0.9406 | 0.9376 | 0.8232 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
BigSalmon/InformalToFormalLincoln24
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: creativeml-openrail-m --- Please see https://civitai.com/models/21305/tenten-character-lohafullckpt #### Trigger words Character names and styles are separated by `;` - Algard - Algard shota - Anisphia - Anisphia loli - Euphyllia - Grantz - Illya - Lainie - Lumielle - Orfans - Sylphine - Tilty - aniscreen - fanart #### Dataset - 3936 anime screenshots - 202 fan-art images - ~30K regularization images
BigSalmon/InformalToFormalLincolnDistilledGPT2
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
BigSalmon/MrLincoln11
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Saifullah-S results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Saifullah-S This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0278 - Train Accuracy: 0.9943 - Validation Loss: 0.0061 - Validation Accuracy: 0.9987 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0278 | 0.9943 | 0.0061 | 0.9987 | 0 | ### Framework versions - Transformers 4.27.1 - TensorFlow 2.11.0 - Tokenizers 0.13.2
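The Saifullah-S card reports only Keras training metrics and does not name the task. Assuming sequence classification (suggested by the accuracy metric, but not confirmed by the card) and a placeholder repo id, inference might look like this:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumptions: sequence-classification head, English input (base model is bert-base-uncased),
# and a placeholder repo id.
repo = "<user>/Saifullah-S"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example sentence to classify.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))
```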
BigSalmon/MrLincoln125MNeo
[ "pytorch", "tensorboard", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: combine-60-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # combine-60-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2538 - Precision: 0.8786 - Recall: 0.9210 - F1 Weighted: 0.8993 - F1 Macro: 0.6284 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 1.12 | 0.09 | 25 | 1.0597 | 0.2586 | 0.5085 | 0.3429 | 0.2247 | | 0.9016 | 0.18 | 50 | 0.5441 | 0.8258 | 0.8642 | 0.8440 | 0.5895 | | 0.6163 | 0.27 | 75 | 0.4097 | 0.8713 | 0.9109 | 0.8897 | 0.6215 | | 0.4973 | 0.36 | 100 | 0.3429 | 0.8726 | 0.9135 | 0.8923 | 0.6234 | | 0.4666 | 0.46 | 125 | 0.3091 | 0.8774 | 0.9198 | 0.8981 | 0.6277 | | 0.4458 | 0.55 | 150 | 0.3671 | 0.8788 | 0.8888 | 0.8697 | 0.6153 | | 0.386 | 0.64 | 175 | 0.2554 | 0.8811 | 0.9229 | 0.9012 | 0.6297 | | 0.3975 | 0.73 | 200 | 0.2712 | 0.8834 | 0.9255 | 0.9037 | 0.6314 | | 0.3293 | 0.82 | 225 | 0.2538 | 0.8786 | 0.9210 | 0.8993 | 0.6284 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
BigSalmon/MrLincoln5
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 41.60 +/- 25.80 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
BigSalmon/MrLincoln7
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Find your model_id: taohoang/ppo-PyramidsTraining 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BigSalmon/MrLincolnBerta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: it datasets: - lmqg/qg_itquad pipeline_tag: text2text-generation tags: - question generation widget: - text: "<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento." example_title: "Question Generation Example 1" - text: "L' individuazione del petrolio e lo sviluppo di nuovi giacimenti richiedeva in genere <hl> da cinque a dieci anni <hl> prima di una produzione significativa." example_title: "Question Generation Example 2" - text: "il <hl> Giappone <hl> è stato il paese più dipendente dal petrolio arabo." example_title: "Question Generation Example 3" model-index: - name: vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_itquad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 7.51 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 21.88 - name: METEOR (Question Generation) type: meteor_question_generation value: 17.78 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 81.15 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 57.1 --- # Model Card of `vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg` This model is fine-tuned version of [vocabtrimmer/mt5-small-trimmed-it-10000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-10000) for question generation task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [vocabtrimmer/mt5-small-trimmed-it-10000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-10000) - **Language:** it - **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="it", model="vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg") # model prediction questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg") output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 81.15 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_1 | 22.96 | default | 
[lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_2 | 15.06 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_3 | 10.47 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_4 | 7.51 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | METEOR | 17.78 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | MoverScore | 57.1 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | ROUGE_L | 21.88 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_itquad - dataset_name: default - input_types: paragraph_answer - output_types: question - prefix_types: None - model: vocabtrimmer/mt5-small-trimmed-it-10000 - max_length: 512 - max_length_output: 32 - epoch: 14 - batch: 16 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-10000-itquad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
BigSalmon/Neo
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-03-18T10:41:19Z
https://huggingface.co/kojima-r/wav2vec2-base-birddb-small ``` import librosa import torch from transformers import Wav2Vec2ForPreTraining, Wav2Vec2Processor sound_file = 'sample.wav' sound_data, _ = librosa.load(sound_file, sr=16000) model_id = "kojima-lab/Cd_kahakuDB" model = Wav2Vec2ForPreTraining.from_pretrained(model_id) result = model(torch.tensor([sound_data])) hidden_vecs = result.projected_states ``` ![PCA](https://huggingface.co/Nushigawa03/Cd_kahakuDB/resolve/main/pca_HL.png) ![UMAP](https://huggingface.co/Nushigawa03/Cd_kahakuDB/resolve/main/umap_HL.png) ![PCA](https://huggingface.co/Nushigawa03/Cd_kahakuDB/resolve/main/pca_region.png) ![UMAP](https://huggingface.co/Nushigawa03/Cd_kahakuDB/resolve/main/umap_region.png) H: red, L: green 1. Myoko Kogen, Niigata Prefecture #ff0000, red 2. Hiki Hills, Saitama Prefecture #ff8000, orange 3. Niijima, Izu Islands #804000, brown 4. Miyakejima, Izu Islands #ffff00, yellow 5. Kikaijima, Amami Islands #00ff00, green 6. Hahajima, Ogasawara Islands #008000, dark green 7. Tsukuba, Ibaraki Prefecture #00ffff, light blue 8. Chichijima, Ogasawara Islands #0000ff, blue 9. Hachijojima, Izu Islands #000080, dark blue 10. Minami-Daito Island #8000ff, purple 11. Around Abashiri, Hokkaido #ff00ff, pink
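The card shows PCA and UMAP plots of the extracted vectors but not how they were produced. Below is a minimal sketch of one way to get a comparable 2-D projection from `hidden_vecs` (mean-pooling over time, then scikit-learn PCA); the pooling and plotting choices are assumptions, not taken from the card, and several clips must be batched for PCA to be meaningful.

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# hidden_vecs from the snippet above has shape (batch, time, feature_dim);
# average over the time axis to get one embedding per clip.
pooled = hidden_vecs.mean(dim=1).detach().numpy()

points = PCA(n_components=2).fit_transform(pooled)  # needs >= 2 clips in the batch
plt.scatter(points[:, 0], points[:, 1])
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```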
BigSalmon/ParaphraseParentheses2.0
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter-unit4-lr98e-5 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 27.20 +/- 16.41 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
BigSalmon/Points
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-03-18T10:53:24Z
--- license: mit tags: - generated_from_trainer datasets: - germanquad model-index: - name: gelectra-base-germanquad-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gelectra-base-germanquad-finetuned-squad This model is a fine-tuned version of [deepset/gelectra-base-germanquad](https://huggingface.co/deepset/gelectra-base-germanquad) on the germanquad dataset. It achieves the following results on the evaluation set: - Loss: 2.5011 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 180 | 1.7745 | | No log | 2.0 | 360 | 1.8828 | | 1.0639 | 3.0 | 540 | 1.9312 | | 1.0639 | 4.0 | 720 | 2.1928 | | 1.0639 | 5.0 | 900 | 2.2980 | | 0.6008 | 6.0 | 1080 | 2.3243 | | 0.6008 | 7.0 | 1260 | 2.3856 | | 0.6008 | 8.0 | 1440 | 2.4762 | | 0.4335 | 9.0 | 1620 | 2.4866 | | 0.4335 | 10.0 | 1800 | 2.5011 | ### Framework versions - Transformers 4.27.2 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
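Like the bert-base-german-cased fine-tune earlier in this list, this GermanQuAD card has no inference example. A minimal sketch using the low-level QA head instead of the pipeline follows; the repo id is a placeholder, since the card does not give the Hub path.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Placeholder repo id -- replace with the actual Hub path of this checkpoint.
repo = "<user>/gelectra-base-germanquad-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForQuestionAnswering.from_pretrained(repo)

question = "Was ist die Hauptstadt von Deutschland?"
context = "Berlin ist die Hauptstadt von Deutschland."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs)
start = int(out.start_logits.argmax())
end = int(out.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))  # expected: "Berlin"
```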
BigSalmon/Robertsy
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-03-18T10:59:22Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 274.50 +/- 31.50 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jackhhhh -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jackhhhh -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jackhhhh ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', True), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
BigSalmon/T5F
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: aiartwork/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BigSalmon/T5Salmon2
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
13
null
--- tags: - sentence-transformers - feature-extraction - transformers - sentence-similarity --- # pushpdeep/sberthindi This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('pushpdeep/sberthindi') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('pushpdeep/sberthindi') model = AutoModel.from_pretrained('pushpdeep/sberthindi') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pushpdeep/sberthindi) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 15106 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
BigTooth/DialoGPT-small-tohru
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
A merged model designed to create TRUE ART. This model was originally published on CIVITAI on March 7, 2023, but unfortunately it was soon attacked by fanatical mobs of ultra-left progressivism, which forced the platform to delete the original page. It combines 64 excellent models published on CIVITAI that closely fit my personal aesthetic taste. However, the original post addresses of some models in the recipe are now 404 pages, for reasons unclear to me. It was originally purely for private use, but due to strong requests from friends in some interest communities, I decided to release it publicly after all. This model is good at generating semi-realistic stylized art, but it also works well for photorealistic renderings, showing a high degree of flexibility. Anime use has not been tested; you are welcome to give it a try. It works amazingly in certain areas, always achieves what I want, and is occasionally able to deliver unexpected but amazingly crafted work. A notable advantage is its astonishing light-and-shadow effects and fascinating atmosphere; the disadvantage is that deformed hands are often difficult to control. The following models are not included in the recipe for this MIX: the anything series, abyssorange series, and chillout series. It's a pity that I can't say more about this model in the introduction; maybe things are different in a better world where everyone can really speak freely without persecution, just like the golden 19th century.
Blabla/Pipipopo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter-v1-v5-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 29.10 +/- 20.45 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Blazeolmo/Scrabunzi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-03-18T11:58:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 245.74 +/- 19.56 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
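This card leaves the usage code as a TODO. Below is a minimal sketch of the usual way an SB3 PPO checkpoint is loaded from the Hub with `huggingface_sb3`; the repo id and filename are placeholders, since the card does not state them.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename -- replace with the actual values for this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```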
BlightZz/MakiseKurisu
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1204.39 +/- 304.19 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
BobBraico/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Perse90/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Brayan/CNN_Brain_Tumor
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter-v1-v5-2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 30.60 +/- 27.95 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Buntan/xlm-roberta-base-finetuned-marc-en
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-03-18T14:06:23Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: ru datasets: - lmqg/qg_ruquad pipeline_tag: text2text-generation tags: - question generation widget: - text: "Нелишним будет отметить, что, развивая это направление, Д. И. Менделеев, поначалу априорно выдвинув идею о температуре, при которой высота мениска будет нулевой, <hl> в мае 1860 года <hl> провёл серию опытов." example_title: "Question Generation Example 1" - text: "Однако, франкоязычный <hl> Квебек <hl> практически никогда не включается в состав Латинской Америки." example_title: "Question Generation Example 2" - text: "Классическим примером международного синдиката XX века была группа компаний <hl> Де Бирс <hl> , которая в 1980-е годы контролировала до 90 % мировой торговли алмазами." example_title: "Question Generation Example 3" model-index: - name: vocabtrimmer/mt5-small-trimmed-ru-30000-ruquad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_ruquad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 18.44 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 33.96 - name: METEOR (Question Generation) type: meteor_question_generation value: 29.21 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 86.35 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 65.04 --- # Model Card of `vocabtrimmer/mt5-small-trimmed-ru-30000-ruquad-qg` This model is fine-tuned version of [ckpts/mt5-small-trimmed-ru-30000](https://huggingface.co/ckpts/mt5-small-trimmed-ru-30000) for question generation task on the [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [ckpts/mt5-small-trimmed-ru-30000](https://huggingface.co/ckpts/mt5-small-trimmed-ru-30000) - **Language:** ru - **Training data:** [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="ru", model="vocabtrimmer/mt5-small-trimmed-ru-30000-ruquad-qg") # model prediction questions = model.generate_q(list_context="Нелишним будет отметить, что, развивая это направление, Д. И. Менделеев, поначалу априорно выдвинув идею о температуре, при которой высота мениска будет нулевой, в мае 1860 года провёл серию опытов.", list_answer="в мае 1860 года") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-ru-30000-ruquad-qg") output = pipe("Нелишним будет отметить, что, развивая это направление, Д. И. 
Менделеев, поначалу априорно выдвинув идею о температуре, при которой высота мениска будет нулевой, <hl> в мае 1860 года <hl> провёл серию опытов.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ru-30000-ruquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_ruquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 86.35 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_1 | 34.2 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_2 | 27.36 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_3 | 22.33 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_4 | 18.44 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | METEOR | 29.21 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | MoverScore | 65.04 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | ROUGE_L | 33.96 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_ruquad - dataset_name: default - input_types: paragraph_answer - output_types: question - prefix_types: None - model: ckpts/mt5-small-trimmed-ru-30000 - max_length: 512 - max_length_output: 32 - epoch: 13 - batch: 16 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ru-30000-ruquad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16,451
null
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 55.21 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1** This is a trained model of a DQPN_freq agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_100.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_freq_100]" python -m cleanrl_utils.enjoy --exp-name DQPN_freq_100 --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed4/raw/main/dqpn_freq.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed4/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_100-seed4/raw/main/poetry.lock poetry install --all-extras python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_100 --policy-network-frequency 100 --seed 4 ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq.py', 'batch_size': 256, 'buffer_size': 300000, 'capture_video': True, 'cuda': True, 'end_e': 0.1, 'env_id': 'CartPole-v1', 'exp_name': 'DQPN_freq_100', 'exploration_fraction': 0.2, 'gamma': 1.0, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 1000, 'policy_network_frequency': 100, 'policy_tau': 1.0, 'save_model': True, 'seed': 4, 'start_e': 1.0, 'target_network_frequency': 20, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 495.60 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1** This is a trained model of a DQPN_freq agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_150.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_freq_150]" python -m cleanrl_utils.enjoy --exp-name DQPN_freq_150 --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed1/raw/main/dqpn_freq.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_150 --policy-network-frequency 150 --seed 1 ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq.py', 'batch_size': 256, 'buffer_size': 300000, 'capture_video': True, 'cuda': True, 'end_e': 0.1, 'env_id': 'CartPole-v1', 'exp_name': 'DQPN_freq_150', 'exploration_fraction': 0.2, 'gamma': 1.0, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 1000, 'policy_network_frequency': 150, 'policy_tau': 1.0, 'save_model': True, 'seed': 1, 'start_e': 1.0, 'target_network_frequency': 20, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
54
2023-03-18T14:11:16Z
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 260.87 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1** This is a trained model of a DQPN_freq agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_150.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_freq_150]" python -m cleanrl_utils.enjoy --exp-name DQPN_freq_150 --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed4/raw/main/dqpn_freq.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed4/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed4/raw/main/poetry.lock poetry install --all-extras python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_150 --policy-network-frequency 150 --seed 4 ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq.py', 'batch_size': 256, 'buffer_size': 300000, 'capture_video': True, 'cuda': True, 'end_e': 0.1, 'env_id': 'CartPole-v1', 'exp_name': 'DQPN_freq_150', 'exploration_fraction': 0.2, 'gamma': 1.0, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 1000, 'policy_network_frequency': 150, 'policy_tau': 1.0, 'save_model': True, 'seed': 4, 'start_e': 1.0, 'target_network_frequency': 20, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "has_space" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19,850
null
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 114.86 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1** This is a trained model of a DQPN_freq agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_150.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_freq_150]" python -m cleanrl_utils.enjoy --exp-name DQPN_freq_150 --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed2/raw/main/dqpn_freq.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_150-seed2/raw/main/poetry.lock poetry install --all-extras python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_150 --policy-network-frequency 150 --seed 2 ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq.py', 'batch_size': 256, 'buffer_size': 300000, 'capture_video': True, 'cuda': True, 'end_e': 0.1, 'env_id': 'CartPole-v1', 'exp_name': 'DQPN_freq_150', 'exploration_fraction': 0.2, 'gamma': 1.0, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 1000, 'policy_network_frequency': 150, 'policy_tau': 1.0, 'save_model': True, 'seed': 2, 'start_e': 1.0, 'target_network_frequency': 20, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
CAMeL-Lab/bert-base-arabic-camelbert-da
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
449
null
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 9.77 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1** This is a trained model of a DQPN_freq agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_30.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_freq_30]" python -m cleanrl_utils.enjoy --exp-name DQPN_freq_30 --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed4/raw/main/dqpn_freq.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed4/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed4/raw/main/poetry.lock poetry install --all-extras python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_30 --policy-network-frequency 30 --seed 4 ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq.py', 'batch_size': 256, 'buffer_size': 300000, 'capture_video': True, 'cuda': True, 'end_e': 0.1, 'env_id': 'CartPole-v1', 'exp_name': 'DQPN_freq_30', 'exploration_fraction': 0.2, 'gamma': 1.0, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 1000, 'policy_network_frequency': 30, 'policy_tau': 1.0, 'save_model': True, 'seed': 4, 'start_e': 1.0, 'target_network_frequency': 20, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
45
null
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1** This is a trained model of a DQPN_freq agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_30.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_freq_30]" python -m cleanrl_utils.enjoy --exp-name DQPN_freq_30 --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed2/raw/main/dqpn_freq.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed2/raw/main/poetry.lock poetry install --all-extras python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_30 --policy-network-frequency 30 --seed 2 ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq.py', 'batch_size': 256, 'buffer_size': 300000, 'capture_video': True, 'cuda': True, 'end_e': 0.1, 'env_id': 'CartPole-v1', 'exp_name': 'DQPN_freq_30', 'exploration_fraction': 0.2, 'gamma': 1.0, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 1000, 'policy_network_frequency': 30, 'policy_tau': 1.0, 'save_model': True, 'seed': 2, 'start_e': 1.0, 'target_network_frequency': 20, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
2023-03-18T14:12:42Z
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 495.93 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1** This is a trained model of a DQPN_freq agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_30.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_freq_30]" python -m cleanrl_utils.enjoy --exp-name DQPN_freq_30 --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed1/raw/main/dqpn_freq.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_30 --policy-network-frequency 30 --seed 1 ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq.py', 'batch_size': 256, 'buffer_size': 300000, 'capture_video': True, 'cuda': True, 'end_e': 0.1, 'env_id': 'CartPole-v1', 'exp_name': 'DQPN_freq_30', 'exploration_fraction': 0.2, 'gamma': 1.0, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 1000, 'policy_network_frequency': 30, 'policy_tau': 1.0, 'save_model': True, 'seed': 1, 'start_e': 1.0, 'target_network_frequency': 20, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
63
null
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQPN_freq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 90.82 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1** This is a trained model of a DQPN_freq agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_freq_30.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_freq_30]" python -m cleanrl_utils.enjoy --exp-name DQPN_freq_30 --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed3/raw/main/dqpn_freq.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQPN_freq_30-seed3/raw/main/poetry.lock poetry install --all-extras python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name DQPN_freq_30 --policy-network-frequency 30 --seed 3 ``` # Hyperparameters ```python {'alg_type': 'dqpn_freq.py', 'batch_size': 256, 'buffer_size': 300000, 'capture_video': True, 'cuda': True, 'end_e': 0.1, 'env_id': 'CartPole-v1', 'exp_name': 'DQPN_freq_30', 'exploration_fraction': 0.2, 'gamma': 1.0, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 1000, 'policy_network_frequency': 30, 'policy_tau': 1.0, 'save_model': True, 'seed': 3, 'start_e': 1.0, 'target_network_frequency': 20, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': True, 'train_frequency': 1, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
132
2023-03-18T14:32:24Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: reinforce-CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
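The card itself ships no code, so here is a minimal evaluation sketch. It assumes you have already rebuilt the trained policy from the Unit 4 notebook as a `policy` object exposing an `act(state)` method that returns `(action, log_prob)`; that interface and the use of `gymnasium` are assumptions, not something this card specifies.

```python
import gymnasium as gym


def evaluate(policy, n_episodes: int = 10) -> float:
    """Roll out the policy on CartPole-v1 and return the mean episode return."""
    env = gym.make("CartPole-v1")
    returns = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        episode_return, done = 0.0, False
        while not done:
            action, _ = policy.act(state)  # hypothetical API: returns (action, log_prob)
            state, reward, terminated, truncated, _ = env.step(action)
            episode_return += reward
            done = terminated or truncated
        returns.append(episode_return)
    env.close()
    return sum(returns) / len(returns)
```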
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,862
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # pushpdeep/sberthindiv2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('pushpdeep/sberthindiv2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('pushpdeep/sberthindiv2') model = AutoModel.from_pretrained('pushpdeep/sberthindiv2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pushpdeep/sberthindiv2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 15106 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
855
null
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for VGG19 A VGG19-BN model from torchvision.models, used in the 'aiornot' classification competition on Hugging Face. It performs binary classification to detect whether an image is AI-generated or not. ## Model Details ### How to Get Started with the Model Use the code below to get started with the model. ```python import torch import torchvision device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') ckpt = torch.load('weights.ckpt', map_location=device) weights = ckpt['model'] model = torchvision.models.vgg19_bn().to(device) model.load_state_dict(weights) ```
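Once the checkpoint is loaded, inference can follow the standard torchvision preprocessing path. This is only a sketch: the 224×224 crop and ImageNet normalization are torchvision defaults assumed here (the card does not state the training transforms), `example.jpg` is a placeholder, and the card does not document which output index corresponds to the AI-generated class.

```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing: the standard ImageNet pipeline used by torchvision models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model.eval()  # `model` and `device` come from the loading snippet above
image = Image.open("example.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    probs = model(batch).softmax(dim=-1)
print(probs)  # the mapping from output index to "AI-generated" is not given in the card
```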
CAMeL-Lab/bert-base-arabic-camelbert-mix
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "Arabic", "Dialect", "Egyptian", "Gulf", "Levantine", "Classical Arabic", "MSA", "Modern Standard Arabic", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20,880
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 9.20 +/- 4.82 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Feldi/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
You could also use the "Sikinx#6557" remake of this LoCon, which is the V2 of this LoCon and uses the same dataset as mine: https://civitai.com/models/21878/ufotable-fate-series-style-loralycoris Keywords: "anime screencap", "ufotable screencap" Sikinx's version: ![00001-anything-v4.5-pruned_323620673.png](https://s3.amazonaws.com/moonup/production/uploads/1679391831751-639b1d7a5d6e30f96220eb7c.png) ![00000-anything-v4.5-pruned_4279840288.png](https://s3.amazonaws.com/moonup/production/uploads/1679391831788-639b1d7a5d6e30f96220eb7c.png) My version: ![00003-anything-v4.5-pruned_1012004334.png](https://s3.amazonaws.com/moonup/production/uploads/1679391869582-639b1d7a5d6e30f96220eb7c.png) ![00002-anything-v4.5-pruned_1855246878.png](https://s3.amazonaws.com/moonup/production/uploads/1679391869624-639b1d7a5d6e30f96220eb7c.png) The colorful one is his version, and the relaxed one is mine. ![00010-anything-v4.5-pruned_371702794.png](https://s3.amazonaws.com/moonup/production/uploads/1679391917019-639b1d7a5d6e30f96220eb7c.png) ![00009-anything-v4.5-pruned_1252778138.png](https://s3.amazonaws.com/moonup/production/uploads/1679391917017-639b1d7a5d6e30f96220eb7c.png)
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
52
null
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Find your model_id: beebeckzzz/SnowballTarget-v1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - language-and-voice-lab/samromur_children metrics: - wer model-index: - name: Whisper tiny Icelandic Children results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: language-and-voice-lab/samromur_children type: language-and-voice-lab/samromur_children config: samromur_children split: validation args: samromur_children metrics: - name: Wer type: wer value: 45.29159303206261 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper tiny Icelandic Children This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the language-and-voice-lab/samromur_children dataset. It achieves the following results on the evaluation set: - Loss: 0.6313 - Wer: 45.2916 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 1.4343 | 0.25 | 250 | 1.4900 | 77.9349 | | 0.7722 | 0.5 | 500 | 0.8731 | 60.2878 | | 0.7181 | 0.75 | 750 | 0.6534 | 46.5034 | | 0.5733 | 1.0 | 1000 | 0.6313 | 45.2916 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.10.0 - Tokenizers 0.13.2
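The card does not include an inference example; the snippet below is a hedged sketch using the transformers ASR pipeline. The repository id is a placeholder because the final Hub id is not stated here, and `sample.wav` stands in for any Icelandic speech recording.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub id of this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="<your-username>/whisper-tiny-icelandic-children",
)

# Long recordings are handled by chunking; 30 s matches Whisper's context window.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```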
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
133
2023-03-18T14:59:20Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.52 +/- 6.63 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Feldi/rl_course_vizdoom_health_gathering_supreme-v2 ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme-v2 ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme-v2 --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - stable_diffusion - checkpoint --- The source of the models is listed below. Please check the original licenses from the source. https://civitai.com/models/6925
CAMeL-Lab/bert-base-arabic-camelbert-msa
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,967
null
--- license: apache-2.0 datasets: - samsum language: - en library_name: transformers tags: - peft - lora - t5 - flan metrics: - rouge model-index: - name: flan-t5-xxl-samsum-peft results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: samsum type: samsum config: samsum split: train args: samsum metrics: - name: Rouge1 type: rouge value: 50.386161 --- # FLAN-T5-XXL LoRA fine-tuned on `samsum` A PEFT-tuned FLAN-T5 XXL model. # flan-t5-xxl-samsum-peft This model is a fine-tuned version of [philschmid/flan-t5-xxl-sharded-fp16](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16) on the samsum dataset. It achieves the following results on the evaluation set: - rouge1: 50.386161% - rouge2: 24.842412% - rougeL: 41.370130% - rougeLsum: 41.394230% ## How to use the model The model was trained using 🤗 [PEFT](https://github.com/huggingface/peft). This repository only contains the fine-tuned adapter weights for LoRA and the configuration to load the model. Below you can find a snippet on how to run inference using the model. This will download FLAN-T5-XXL from the Hugging Face Hub if it is not available locally. 1. Load the model ```python import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForSeq2SeqLM, AutoTokenizer # Load peft config for pre-trained checkpoint etc. peft_model_id = "philschmid/flan-t5-xxl-samsum-peft" config = PeftConfig.from_pretrained(peft_model_id) # load base LLM model and tokenizer model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map={"":0}) tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) # Load the Lora model model = PeftModel.from_pretrained(model, peft_model_id, device_map={"":0}) model.eval() ``` 2. Generate ```python text = "test" input_ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids.cuda() outputs = model.generate(input_ids=input_ids, max_new_tokens=10, do_sample=True, top_p=0.9) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]) ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-3 - train_batch_size: auto - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.1 - PEFT@main
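Because the adapter was tuned on samsum, a dialogue-shaped input illustrates the intended use better than the placeholder string in step 2. The snippet below reuses the `model` and `tokenizer` from step 1; the dialogue and the "Summarize the following dialogue" prefix are made up for illustration, since the card does not state the exact prompt format used during fine-tuning.

```python
# Hypothetical samsum-style dialogue for demonstration purposes.
dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Ben: Yes, but could we push it to 6:30?\n"
    "Anna: Sure, see you at the cafe then."
)
prompt = f"Summarize the following dialogue:\n{dialogue}\nSummary:"

input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
outputs = model.generate(input_ids=input_ids, max_new_tokens=50, do_sample=False)
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
```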
CAUKiel/JavaBERT-uncased
[ "pytorch", "safetensors", "bert", "fill-mask", "java", "code", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 786.00 +/- 222.23 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ellipsoul -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ellipsoul -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Ellipsoul ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
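Besides the RL Zoo CLI above, the checkpoint can also be loaded directly in Python via `huggingface_sb3`. The repo id and filename below follow the usual rl-zoo naming convention and are therefore assumptions; check the repository's file list if loading fails.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed repo id / filename based on the standard rl-zoo naming scheme.
checkpoint = load_from_hub(
    repo_id="Ellipsoul/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
print(model.policy)  # inspect the loaded CnnPolicy
```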
CL/safe-math-bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - coreml - stable-diffusion - text-to-image - not-for-all-eyes --- # Core ML Converted Model: - This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br> - Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br> - `split_einsum` version is compatible with all compute unit options including Neural Engine.<br> - `original` version is only compatible with CPU & GPU option.<br> - Custom resolution versions are tagged accordingly.<br> - `vae` tagged files have a vae embedded into the model.<br> - Descriptions are posted as-is from original model source. Not all features and/or results may be available in CoreML format.<br> - This model was converted with `vae-encoder` for i2i. - Models that are 32 bit will have "fp32" in the filename. # Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). # iCoMix: Source(s): [CivitAI](https://civitai.com/models/16164/icomix) Comic style Mix! Thank you for all Reviews, Great Model/Lora Creator, and Prompt Crafter!!! About this version Merge with this mix model: MODEL : MixFishMix RPG V4 YuzuLemonMilk joMadDiffusion LORA : Comic Diffusion LORA
CLAck/en-vi
[ "pytorch", "marian", "text2text-generation", "en", "vi", "dataset:ALT", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-03-18T15:21:46Z
---
license: apache-2.0
tags:
- Image-Classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
widget:
- src: >-
    https://huggingface.co/platzi/platzi-vit-base-beans/resolve/main/healthy.jpeg
  example_title: Healthy
- src: >-
    https://huggingface.co/platzi/platzi-vit-base-beans/resolve/main/bean_rust.jpeg
  example_title: Bean Rust
model-index:
- name: platzi-vit-model-Joaquin-Romero
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# platzi-vit-model-Joaquin-Romero

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0613
- Accuracy: 0.9850

## Model description

It is an image classification model fine-tuned to classify bean leaf images from the beans dataset (healthy, angular leaf spot, bean rust).

## Intended uses & limitations

None

## Training and evaluation data

Beans dataset

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1475        | 3.85  | 500  | 0.0613          | 0.9850   |

### Framework versions

- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
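For inference, a minimal sketch using the `transformers` image-classification pipeline (the repository id below is an assumption; substitute the actual Hub id of this checkpoint):

```python
from transformers import pipeline
from PIL import Image

# Hypothetical repo id; replace with the real Hub id of this fine-tuned model
classifier = pipeline("image-classification", model="platzi/platzi-vit-model-Joaquin-Romero")

image = Image.open("bean_leaf.jpeg")  # any local bean-leaf photo
print(classifier(image))              # e.g. [{'label': 'healthy', 'score': 0.99}, ...]
```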
CLAck/indo-mixed
[ "pytorch", "marian", "text2text-generation", "en", "id", "dataset:ALT", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Prgrg/ja-en-JESC-v5.0-wth-lr-decay results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Prgrg/ja-en-JESC-v5.0-wth-lr-decay This model is a fine-tuned version of [Prgrg/ja-en-JESC-v4.0-wth-lr-decay](https://huggingface.co/Prgrg/ja-en-JESC-v4.0-wth-lr-decay) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0104 - Validation Loss: 2.5781 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 150000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.001} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.1973 | 2.5649 | 0 | | 2.1604 | 2.5628 | 1 | | 2.1194 | 2.5616 | 2 | | 2.0802 | 2.5601 | 3 | | 2.0434 | 2.5694 | 4 | | 2.0104 | 2.5781 | 5 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
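For reference, the optimizer configuration listed above corresponds roughly to the following Keras/`transformers` setup (a sketch; variable names are illustrative):

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear (power=1.0) decay from 1e-4 to 0 over 150k steps, as in the config above
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1e-4,
    decay_steps=150_000,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(learning_rate=lr_schedule, weight_decay_rate=0.001)

# Mixed-precision training, matching training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")
```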
CLEE/CLEE
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---

### dexechii's-style Dreambooth model trained by Anonim3327 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Model base: Lawlas's Yiffymix 2.0 (furry model) (!!!VAE is not needed!!!)

Images were taken from https://twitter.com/dexechii. Don't forget to send in your results!

P.S. I opened a Discord server where you can offer your ideas for models. Link: https://discord.gg/HDfvejBPMJ

Test the concept via the A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

### Sample pictures of this concept:

![0](https://huggingface.co/Anonim3327/dexechii-s-style/resolve/main/sample_images/00021-3117519479.png)
![1](https://huggingface.co/Anonim3327/dexechii-s-style/resolve/main/sample_images/00014-3487930816.png)
![2](https://huggingface.co/Anonim3327/dexechii-s-style/resolve/main/sample_images/00018-3412975122.png)
![3](https://huggingface.co/Anonim3327/dexechii-s-style/resolve/main/sample_images/00016-925491525.png)
![4](https://huggingface.co/Anonim3327/dexechii-s-style/resolve/main/sample_images/00019-2820150710.png)
![5](https://huggingface.co/Anonim3327/dexechii-s-style/resolve/main/sample_images/00022-1178791034.png)
![6](https://huggingface.co/Anonim3327/dexechii-s-style/resolve/main/sample_images/00020-1695171271.png)
![7](https://huggingface.co/Anonim3327/dexechii-s-style/resolve/main/sample_images/00015-1588195780.png)
CLTL/MedRoBERTa.nl
[ "pytorch", "roberta", "fill-mask", "nl", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,988
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: uit-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # uit-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2435 - Precision: 0.9352 - Recall: 0.9381 - F1 Weighted: 0.9359 - F1 Macro: 0.8355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 0.9917 | 0.14 | 25 | 0.8298 | 0.7568 | 0.7473 | 0.7280 | 0.5092 | | 0.6922 | 0.28 | 50 | 0.4733 | 0.8513 | 0.8901 | 0.8694 | 0.6071 | | 0.3939 | 0.42 | 75 | 0.3388 | 0.8695 | 0.9116 | 0.8900 | 0.6217 | | 0.341 | 0.56 | 100 | 0.3277 | 0.8705 | 0.9059 | 0.8854 | 0.6180 | | 0.25 | 0.7 | 125 | 0.2472 | 0.8803 | 0.9223 | 0.9007 | 0.6290 | | 0.3081 | 0.84 | 150 | 0.2520 | 0.8798 | 0.9185 | 0.8974 | 0.6265 | | 0.2782 | 0.98 | 175 | 0.2321 | 0.8823 | 0.9223 | 0.9010 | 0.6290 | | 0.2496 | 1.12 | 200 | 0.2476 | 0.9187 | 0.9191 | 0.9177 | 0.7688 | | 0.2651 | 1.26 | 225 | 0.2530 | 0.9251 | 0.9305 | 0.9258 | 0.7883 | | 0.2531 | 1.4 | 250 | 0.2207 | 0.9337 | 0.9381 | 0.9339 | 0.8097 | | 0.2863 | 1.54 | 275 | 0.2243 | 0.9243 | 0.9280 | 0.9259 | 0.7783 | | 0.2469 | 1.68 | 300 | 0.2437 | 0.9350 | 0.9381 | 0.9314 | 0.7926 | | 0.2218 | 1.82 | 325 | 0.2663 | 0.9243 | 0.9261 | 0.9240 | 0.8137 | | 0.2299 | 1.96 | 350 | 0.2365 | 0.9368 | 0.9400 | 0.9377 | 0.8321 | | 0.1664 | 2.09 | 375 | 0.2295 | 0.9375 | 0.9413 | 0.9377 | 0.8238 | | 0.1682 | 2.23 | 400 | 0.2213 | 0.9386 | 0.9400 | 0.9392 | 0.8381 | | 0.1803 | 2.37 | 425 | 0.2207 | 0.9347 | 0.9368 | 0.9355 | 0.8320 | | 0.1509 | 2.51 | 450 | 0.2197 | 0.9388 | 0.9419 | 0.9371 | 0.8135 | | 0.1338 | 2.65 | 475 | 0.2480 | 0.9349 | 0.9375 | 0.9303 | 0.7886 | | 0.1525 | 2.79 | 500 | 0.2291 | 0.9405 | 0.9438 | 0.9410 | 0.8352 | | 0.1427 | 2.93 | 525 | 0.2435 | 0.9352 | 0.9381 | 0.9359 | 0.8355 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
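The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` and any option not listed in the card are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="uit-vsfc-xlm-r",        # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=5,
)
```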
CLTL/gm-ner-xlmrbase
[ "pytorch", "tf", "xlm-roberta", "token-classification", "nl", "transformers", "dighum", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
CLTL/icf-levels-att
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
2023-03-18T15:51:27Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -145.61 +/- 95.81 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'yangwj2011/ppo-LunarLander-v2-unit8' 'batch_size': 512 'minibatch_size': 128} ```
CLTL/icf-levels-ber
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
Access to model korp123/looker is restricted and you are not in the authorized list. Visit https://huggingface.co/korp123/looker to ask for access.
CLTL/icf-levels-enr
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: synthetic-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # synthetic-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5363 - Precision: 0.8271 - Recall: 0.4694 - F1 Weighted: 0.5676 - F1 Macro: 0.4308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 1.1067 | 0.16 | 25 | 1.0890 | 0.4082 | 0.4409 | 0.4087 | 0.2919 | | 0.8313 | 0.31 | 50 | 1.0063 | 0.8055 | 0.4536 | 0.5326 | 0.4108 | | 0.5543 | 0.47 | 75 | 1.2653 | 0.7821 | 0.5130 | 0.5959 | 0.4533 | | 0.561 | 0.62 | 100 | 1.2028 | 0.8212 | 0.6001 | 0.6753 | 0.5171 | | 0.5141 | 0.78 | 125 | 1.3416 | 0.8536 | 0.4896 | 0.5820 | 0.4459 | | 0.4595 | 0.94 | 150 | 1.2413 | 0.8543 | 0.6349 | 0.7147 | 0.5427 | | 0.4647 | 1.09 | 175 | 1.4473 | 0.8470 | 0.5894 | 0.6803 | 0.5160 | | 0.4351 | 1.25 | 200 | 1.2805 | 0.8418 | 0.6267 | 0.7058 | 0.5377 | | 0.4012 | 1.41 | 225 | 1.6780 | 0.8222 | 0.4176 | 0.5164 | 0.3948 | | 0.3711 | 1.56 | 250 | 1.5256 | 0.8483 | 0.5186 | 0.6145 | 0.4644 | | 0.4064 | 1.72 | 275 | 1.6555 | 0.8420 | 0.4593 | 0.5710 | 0.4323 | | 0.3905 | 1.88 | 300 | 1.5819 | 0.8442 | 0.4024 | 0.4908 | 0.3764 | | 0.3487 | 2.03 | 325 | 1.5363 | 0.8271 | 0.4694 | 0.5676 | 0.4308 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
CLTL/icf-levels-mbw
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Find your model_id: M331/pp0-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
CM-CA/DialoGPT-small-cartman
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.86 +/- 5.29 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r yangwj2011/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: synthetic-10-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # synthetic-10-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3908 - Precision: 0.8874 - Recall: 0.5332 - F1 Weighted: 0.6428 - F1 Macro: 0.4878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 1.1043 | 1.56 | 25 | 1.1570 | 0.1740 | 0.0726 | 0.0556 | 0.0633 | | 0.9299 | 3.12 | 50 | 1.1444 | 0.6971 | 0.3696 | 0.4411 | 0.3400 | | 0.6689 | 4.69 | 75 | 1.3908 | 0.8874 | 0.5332 | 0.6428 | 0.4878 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
CNT-UPenn/RoBERTa_for_seizureFrequency_QA
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: synthetic-20-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # synthetic-20-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9792 - Precision: 0.8762 - Recall: 0.4403 - F1 Weighted: 0.5325 - F1 Macro: 0.4085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 1.1229 | 0.78 | 25 | 1.1125 | 0.2741 | 0.5085 | 0.3441 | 0.2335 | | 0.8029 | 1.56 | 50 | 0.7848 | 0.8064 | 0.7334 | 0.7660 | 0.5695 | | 0.5565 | 2.34 | 75 | 0.9492 | 0.8333 | 0.5546 | 0.6380 | 0.4791 | | 0.3928 | 3.12 | 100 | 1.4848 | 0.8262 | 0.5212 | 0.6141 | 0.4681 | | 0.3988 | 3.91 | 125 | 1.6936 | 0.8733 | 0.4226 | 0.4931 | 0.3806 | | 0.3452 | 4.69 | 150 | 1.9792 | 0.8762 | 0.4403 | 0.5325 | 0.4085 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
CSZay/bart
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: synthetic-30-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # synthetic-30-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5151 - Precision: 0.8404 - Recall: 0.4934 - F1 Weighted: 0.5903 - F1 Macro: 0.4491 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 1.0948 | 0.52 | 25 | 1.0688 | 0.8549 | 0.3936 | 0.4525 | 0.3417 | | 0.8944 | 1.04 | 50 | 1.0322 | 0.8312 | 0.8004 | 0.8097 | 0.6402 | | 0.5958 | 1.56 | 75 | 1.2383 | 0.8377 | 0.5938 | 0.6785 | 0.5159 | | 0.4904 | 2.08 | 100 | 1.2349 | 0.8497 | 0.6715 | 0.7400 | 0.5643 | | 0.499 | 2.6 | 125 | 1.0563 | 0.8417 | 0.7119 | 0.7629 | 0.5814 | | 0.4567 | 3.12 | 150 | 1.3069 | 0.8000 | 0.5205 | 0.6048 | 0.4573 | | 0.4061 | 3.65 | 175 | 1.4464 | 0.8399 | 0.4422 | 0.5289 | 0.4034 | | 0.3652 | 4.17 | 200 | 1.4604 | 0.8393 | 0.5016 | 0.6091 | 0.4594 | | 0.3696 | 4.69 | 225 | 1.5151 | 0.8404 | 0.4934 | 0.5903 | 0.4491 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
CTBC/ATS
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: lalina/my_awesome_eli5_mlm_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # lalina/my_awesome_eli5_mlm_model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0147 - Validation Loss: 0.0128 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0307 | 0.0145 | 0 | | 0.0236 | 0.0110 | 1 | | 0.0147 | 0.0128 | 2 | ### Framework versions - Transformers 4.27.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
Caddy/UD
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-03-18T16:34:25Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 16.60 +/- 15.64 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Calamarii/calamari
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: synthetic-40-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # synthetic-40-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3921 - Precision: 0.8929 - Recall: 0.4782 - F1 Weighted: 0.5759 - F1 Macro: 0.4410 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 1.0989 | 0.39 | 25 | 1.0971 | 0.6042 | 0.3430 | 0.3150 | 0.2438 | | 0.8219 | 0.78 | 50 | 0.8525 | 0.7859 | 0.7682 | 0.7577 | 0.5881 | | 0.5421 | 1.17 | 75 | 1.2589 | 0.8433 | 0.5047 | 0.6097 | 0.4640 | | 0.456 | 1.56 | 100 | 1.2742 | 0.8534 | 0.5111 | 0.6086 | 0.4656 | | 0.4956 | 1.95 | 125 | 1.4465 | 0.8210 | 0.4195 | 0.4998 | 0.3903 | | 0.3392 | 2.34 | 150 | 1.1332 | 0.8779 | 0.5111 | 0.6126 | 0.4699 | | 0.3776 | 2.73 | 175 | 1.3973 | 0.8858 | 0.5199 | 0.6248 | 0.4729 | | 0.3287 | 3.12 | 200 | 1.4185 | 0.8428 | 0.5022 | 0.5984 | 0.4556 | | 0.3325 | 3.52 | 225 | 1.3921 | 0.8929 | 0.4782 | 0.5759 | 0.4410 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
Callidior/bert2bert-base-arxiv-titlegen
[ "pytorch", "safetensors", "encoder-decoder", "text2text-generation", "en", "dataset:arxiv_dataset", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible", "has_space" ]
summarization
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
145
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: yawnznn234/my_awesome_eli5_mlm_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # yawnznn234/my_awesome_eli5_mlm_model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0170 - Validation Loss: 0.0195 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0845 | 0.0765 | 0 | | 0.0279 | 0.0144 | 1 | | 0.0170 | 0.0195 | 2 | ### Framework versions - Transformers 4.27.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
CallumRai/HansardGPT2
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="Perse90/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
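To roll out the greedy policy from the loaded checkpoint, a minimal sketch (assuming the pickle exposes a `qtable` entry, as in the Deep RL Course notebook this card follows, and the classic Gym step/reset API used in the snippet above):

```python
import gym
import numpy as np

qtable = model["qtable"]            # assumed key in the loaded pickle
env = gym.make(model["env_id"])

state = env.reset()
done, total_reward = False, 0
while not done:
    action = np.argmax(qtable[state])            # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```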
CalvinHuang/mt5-small-finetuned-amazon-en-es
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "transformers", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
summarization
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- tags: - CartPole-v1 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 221.00 +/- 81.18 name: mean_reward verified: false --- # PPO Agent Playing CartPole-v1 This is a trained model of a PPO agent playing CartPole-v1. # Hyperparameters ```python {'exp_name': 'unit8' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'CartPole-v1' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'ammr/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
Cameron/BERT-SBIC-offensive
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: owendoar/my_awesome_eli5_mlm_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # owendoar/my_awesome_eli5_mlm_model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0256 - Validation Loss: 0.0183 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0955 | 0.0165 | 0 | | 0.0163 | 0.0112 | 1 | | 0.0256 | 0.0183 | 2 | ### Framework versions - Transformers 4.27.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
Cameron/BERT-eec-emotion
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Find your model_id: britojr/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Cameron/BERT-jigsaw-identityhate
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- library_name: keras datasets: - nsanghi/sample-datasets tags: - nature - keras-dreamboot --- ## Model description This is a Stable Diffusion model fine-tuned with Keras using Dreambooth on white breasted nuthatch ## Intended uses & limitations This model is trained for Dreambooth tutorial. ## Training and evaluation data This model is trained on dog images in [this dataset](https://huggingface.co/datasets/nsanghi/sample-datasets). ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | inner_optimizer.class_name | Custom>RMSprop | | inner_optimizer.config.name | RMSprop | | inner_optimizer.config.weight_decay | None | | inner_optimizer.config.clipnorm | None | | inner_optimizer.config.global_clipnorm | None | | inner_optimizer.config.clipvalue | None | | inner_optimizer.config.use_ema | False | | inner_optimizer.config.ema_momentum | 0.99 | | inner_optimizer.config.ema_overwrite_frequency | 100 | | inner_optimizer.config.jit_compile | True | | inner_optimizer.config.is_legacy_optimizer | False | | inner_optimizer.config.learning_rate | 0.0010000000474974513 | | inner_optimizer.config.rho | 0.9 | | inner_optimizer.config.momentum | 0.0 | | inner_optimizer.config.epsilon | 1e-07 | | inner_optimizer.config.centered | False | | dynamic | True | | initial_scale | 32768.0 | | dynamic_growth_steps | 2000 | | training_precision | mixed_float16 |
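To generate images with the fine-tuned weights, a common pattern from the Keras Dreambooth sprint looks roughly like this (the repository id is a placeholder, and swapping the diffusion model in via the private attribute mirrors the sprint notebooks rather than a stable API):

```python
import keras_cv
from huggingface_hub import from_pretrained_keras

# Base Stable Diffusion pipeline from KerasCV
sd_model = keras_cv.models.StableDiffusion(img_width=512, img_height=512)

# Placeholder repo id - replace with the Hub id of this fine-tuned diffusion model
dreambooth_diffusion_model = from_pretrained_keras("your-username/your-dreambooth-model")
sd_model._diffusion_model = dreambooth_diffusion_model  # pattern used in the sprint notebooks

images = sd_model.text_to_image("a photo of a white-breasted nuthatch", batch_size=3)
```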
Cameron/BERT-mdgender-convai-binary
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: synthetic-50-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # synthetic-50-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4091 - Precision: 0.8556 - Recall: 0.5243 - F1 Weighted: 0.6121 - F1 Macro: 0.4655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 1.0942 | 0.31 | 25 | 1.0558 | 0.8673 | 0.3816 | 0.3914 | 0.2948 | | 0.9353 | 0.62 | 50 | 1.0523 | 0.8262 | 0.7448 | 0.7727 | 0.5946 | | 0.6801 | 0.94 | 75 | 1.5973 | 0.8845 | 0.3664 | 0.4515 | 0.3550 | | 0.5368 | 1.25 | 100 | 1.2531 | 0.8369 | 0.5755 | 0.6656 | 0.5041 | | 0.4458 | 1.56 | 125 | 1.4147 | 0.8289 | 0.4485 | 0.5348 | 0.4102 | | 0.4611 | 1.88 | 150 | 1.4477 | 0.8686 | 0.4649 | 0.5568 | 0.4276 | | 0.3957 | 2.19 | 175 | 1.7033 | 0.8750 | 0.4637 | 0.5633 | 0.4317 | | 0.4183 | 2.5 | 200 | 1.6080 | 0.8555 | 0.4479 | 0.5540 | 0.4217 | | 0.3482 | 2.81 | 225 | 1.4091 | 0.8556 | 0.5243 | 0.6121 | 0.4655 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
Cameron/BERT-mdgender-wizard
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Find your model_id: M331/pp0-PyramidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Cameron/BERT-rtgender-opgender-annotations
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Find your model_id: britojr/ppo-PyramidsTraining 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Camzure/MaamiBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
datasets:
- competitions/aiornot
language:
- en
metrics:
- accuracy
tags:
- generative ai
- classification
---
Classification model used to classify real images and AI-generated images.\
The model used is swin-tiny-patch4-window7-224 fine-tuned on the aiornot dataset.\
To use the model:
```
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

labels = ["Real", "AI"]

feature_extractor = AutoFeatureExtractor.from_pretrained("Nahrawy/AIorNot")
model = AutoModelForImageClassification.from_pretrained("Nahrawy/AIorNot")

# Load the image to classify
image = Image.open("example.jpg")

# Preprocess, run the classifier, and map the highest logit back to a label
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits
prediction = logits.argmax(-1).item()
label = labels[prediction]
print(label)
```
Canadiancaleb/DialoGPT-small-walter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-03-18T16:57:38Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall model-index: - name: synthetic-60-vsfc-xlm-r results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # synthetic-60-vsfc-xlm-r This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9000 - Precision: 0.8724 - Recall: 0.4738 - F1 Weighted: 0.5881 - F1 Macro: 0.4471 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Weighted | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-----------:|:--------:| | 1.1177 | 0.26 | 25 | 1.0206 | 0.6156 | 0.6273 | 0.5871 | 0.4051 | | 0.775 | 0.52 | 50 | 1.0222 | 0.8723 | 0.4611 | 0.5609 | 0.4297 | | 0.603 | 0.78 | 75 | 0.9684 | 0.8147 | 0.7037 | 0.7494 | 0.5713 | | 0.5153 | 1.04 | 100 | 1.1462 | 0.8824 | 0.5313 | 0.6387 | 0.4904 | | 0.4893 | 1.3 | 125 | 1.6944 | 0.8437 | 0.4163 | 0.5168 | 0.3994 | | 0.4323 | 1.56 | 150 | 1.6921 | 0.8648 | 0.4460 | 0.5312 | 0.4104 | | 0.4281 | 1.82 | 175 | 1.3086 | 0.8351 | 0.5161 | 0.6038 | 0.4573 | | 0.4343 | 2.08 | 200 | 1.4962 | 0.8800 | 0.4870 | 0.5859 | 0.4480 | | 0.338 | 2.34 | 225 | 1.9000 | 0.8724 | 0.4738 | 0.5881 | 0.4471 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
Canadiancaleb/jessebot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-03-18T16:59:27Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -155.26 +/- 78.99 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'unit8' 'seed': 24 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 1000000 'learning_rate': 0.0001 'num_envs': 16 'num_steps': 1024 'anneal_lr': True 'gae': True 'gamma': 0.999 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'ammr/ppo-LunarLander-v2' 'batch_size': 16384 'minibatch_size': 4096} ```
Canyonevo/DialoGPT-medium-KingHenry
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: beebeckzzz/Pyramids-v1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Capreolus/birch-bert-large-msmarco_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: apache-2.0 tags: - llama - lora - alpaca - 13b --- # Alpaca-LoRA 13B This repo contains a Low-Rank Adapter (LoRA) for LLaMA 13B fit on the [cleaned](https://github.com/tloen/alpaca-lora/commit/f7044049ab7916d38cbc84d16b323a8ecb1c6b58#diff-98320406f0dfb2be23a93a52b2532d898e486f1241d76dcd54a5d031ccb756fc) [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset. This repo does not contain the LLaMA weights, but instead the tuned LoRA weights which can then be applied to existing LLaMA 13B weights. Instructions for running it can be found at [https://github.com/tloen/alpaca-lora](https://github.com/tloen/alpaca-lora).
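To make the step of applying the tuned LoRA weights to existing LLaMA 13B weights concrete, here is a minimal sketch using `transformers` and `peft`. It is not the repo's official script: the base-model and adapter paths are placeholders you must point at your own copies, and the prompt simply follows the Stanford Alpaca template.

```python
# Minimal sketch (not the official script): apply an Alpaca LoRA adapter to LLaMA 13B.
# Both paths below are placeholders.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_path = "path/to/llama-13b-hf"    # placeholder: LLaMA 13B in Hugging Face format
adapter_path = "path/to/alpaca-lora-13b"    # placeholder: this repo's LoRA weights

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
model = LlamaForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Wrap the frozen base model with the Low-Rank Adapter weights.
model = PeftModel.from_pretrained(model, adapter_path)
model.eval()

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three primary colors.\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Keeping the adapter separate from the base weights is what makes a 13B fine-tune distributable as a small download rather than a full set of model weights.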
Captain-1337/CrudeBERT
[ "pytorch", "bert", "text-classification", "arxiv:1908.10063", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: vit-base-aiornot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-aiornot This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
Carlork314/Carlos
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - stable_diffusion - lora --- The source of the models is listed below. Please check the original licenses from the source. https://civitai.com/models/16602
Carlork314/Xd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 pipeline_tag: question-answering tags: - question-answering - transformers - generated_from_trainer datasets: - squad_v2 - LLukas22/nq-simplified language: - en --- # all-MiniLM-L12-v2-qa-en This model is an extractive qa model. It's a fine-tuned version of [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) on the following datasets: [squad_v2](https://huggingface.co/datasets/squad_v2), [LLukas22/nq-simplified](https://huggingface.co/datasets/LLukas22/nq-simplified). ## Usage You can use the model like this: ```python from transformers import pipeline #Make predictions model_name = "LLukas22/all-MiniLM-L12-v2-qa-en" nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { "question": "What's my name?", "context": "My name is Clara and I live in Berkeley." } result = nlp(QA_input) print(result) ``` Alternatively you can load the model and tokenizer on their own: ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer #Make predictions model_name = "LLukas22/all-MiniLM-L12-v2-qa-en" model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2E-05 - per device batch size: 60 - effective batch size: 180 - seed: 42 - optimizer: AdamW with betas (0.9,0.999) and eps 1E-08 - weight decay: 1E-02 - D-Adaptation: False - Warmup: False - number of epochs: 10 - mixed_precision_training: bf16 ## Training results | Epoch | Train Loss | Validation Loss | | ----- | ---------- | --------------- | | 0 | 2.65 | 1.88 | | 1 | 1.83 | 1.74 | | 2 | 1.69 | 1.69 | | 3 | 1.63 | 1.68 | | 4 | 1.6 | 1.67 | | 5 | 1.58 | 1.66 | | 6 | 1.57 | 1.66 | | 7 | 1.57 | 1.66 | ## Evaluation results | Epoch | f1 | exact_match | | ----- | ----- | ----- | | 0 | 0.507 | 0.378 | | 1 | 0.53 | 0.418 | | 2 | 0.544 | 0.431 | | 3 | 0.552 | 0.429 | | 4 | 0.557 | 0.439 | | 5 | 0.561 | 0.438 | | 6 | 0.564 | 0.441 | | 7 | 0.566 | 0.441 | ## Framework versions - Transformers: 4.25.1 - PyTorch: 2.0.0+cu118 - PyTorch Lightning: 1.8.6 - Datasets: 2.7.1 - Tokenizers: 0.13.1 - Sentence Transformers: 2.2.2 ## Additional Information This model was trained as part of my Master's Thesis **'Evaluation of transformer based language models for use in service information systems'**. The source code is available on [Github](https://github.com/LLukas22/Retrieval-Augmented-QA).
Carolhuehuehuehue/Sla
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - stable_diffusion - lora --- The source of the models is listed below. Please check the original licenses from the source. https://civitai.com/models/12324
CasualHomie/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: es datasets: - lmqg/qg_esquad pipeline_tag: text2text-generation tags: - question answering widget: - text: "question: ¿Cuál es la población de Nueva York a partir de 2014?, context: Situada en uno de los mayores puertos naturales del mundo, la ciudad de Nueva York consta de cinco municipios, cada uno de los cuales es un condado separado del estado de Nueva York. Los cinco distritos - Brooklyn, Queens, Manhattan, el Bronx y Staten Island - se consolidaron en una sola ciudad en 1898. Con una población censada estimada en 2014 de 8.491.079 habitantes distribuidos en una superficie de solo 790 km ², Nueva York es la ciudad más densamente poblada de los Estados Unidos. Hasta 800 idiomas se hablan en Nueva York, por lo que es la ciudad más lingüísticamente diversa del mundo. Según estimaciones del censo de 2014, la región metropolitana de la ciudad de Nueva York sigue siendo por un margen significativo la más poblada de los Estados Unidos, según lo definido tanto por el Área Estadística Metropolitana (20,1 millones de residentes). En 2013, el MSA produjo un producto metropolitano bruto (GMP) de casi US $1,39 billones, mientras que en 2012, el CSA generó un GMP de más de US $1,55 billones, ambos clasificados en primer lugar." example_title: "Question Answering Example 1" - text: "question: ¿Cómo se llama el ejército personal de Sassou?, context: El progreso democrático del Congo se descarriló en 1997, cuando Lissouba y Sassou comenzaron a luchar por el poder en la guerra civil. A medida que se acercaban las elecciones presidenciales de julio de 1997, las tensiones entre los campos de Lissouba y Sassou aumentaron. El 5 de junio, las fuerzas del gobierno del presidente Lissouba rodearon el complejo de Sassou en Brazzaville y Sassou ordenó a los miembros de su milicia privada (conocida como Cobras) resistir. Así comenzó un conflicto de cuatro meses que destruyó o dañó gran parte de Brazzaville y causó decenas de miles de muertes civiles. A principios de octubre, el régimen socialista angoleño comenzó una invasión del Congo para instalar a Sassou en el poder. A mediados de octubre, el gobierno de Lissouba cayó. Poco después, Sassou se declaró presidente." example_title: "Question Answering Example 2" model-index: - name: vocabtrimmer/mt5-small-trimmed-es-30000-esquad-qa results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_esquad type: default args: default metrics: - name: BLEU4 (Question Answering) type: bleu4_question_answering value: 16.41 - name: ROUGE-L (Question Answering) type: rouge_l_question_answering value: 36.91 - name: METEOR (Question Answering) type: meteor_question_answering value: 31.2 - name: BERTScore (Question Answering) type: bertscore_question_answering value: 91.32 - name: MoverScore (Question Answering) type: moverscore_question_answering value: 76.17 - name: AnswerF1Score (Question Answering) type: answer_f1_score__question_answering value: 59.77 - name: AnswerExactMatch (Question Answering) type: answer_exact_match_question_answering value: 40.07 --- # Model Card of `vocabtrimmer/mt5-small-trimmed-es-30000-esquad-qa` This model is fine-tuned version of [ckpts/mt5-small-trimmed-es-30000](https://huggingface.co/ckpts/mt5-small-trimmed-es-30000) for question answering task on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [ckpts/mt5-small-trimmed-es-30000](https://huggingface.co/ckpts/mt5-small-trimmed-es-30000) - **Language:** es - **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="es", model="vocabtrimmer/mt5-small-trimmed-es-30000-esquad-qa") # model prediction answers = model.answer_q(list_question="¿Cuál es la población de Nueva York a partir de 2014?", list_context=" Situada en uno de los mayores puertos naturales del mundo, la ciudad de Nueva York consta de cinco municipios, cada uno de los cuales es un condado separado del estado de Nueva York. Los cinco distritos - Brooklyn, Queens, Manhattan, el Bronx y Staten Island - se consolidaron en una sola ciudad en 1898. Con una población censada estimada en 2014 de 8.491.079 habitantes distribuidos en una superficie de solo 790 km ², Nueva York es la ciudad más densamente poblada de los Estados Unidos. Hasta 800 idiomas se hablan en Nueva York, por lo que es la ciudad más lingüísticamente diversa del mundo. Según estimaciones del censo de 2014, la región metropolitana de la ciudad de Nueva York sigue siendo por un margen significativo la más poblada de los Estados Unidos, según lo definido tanto por el Área Estadística Metropolitana (20,1 millones de residentes). En 2013, el MSA produjo un producto metropolitano bruto (GMP) de casi US $1,39 billones, mientras que en 2012, el CSA generó un GMP de más de US $1,55 billones, ambos clasificados en primer lugar.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-es-30000-esquad-qa") output = pipe("question: ¿Cuál es la población de Nueva York a partir de 2014?, context: Situada en uno de los mayores puertos naturales del mundo, la ciudad de Nueva York consta de cinco municipios, cada uno de los cuales es un condado separado del estado de Nueva York. Los cinco distritos - Brooklyn, Queens, Manhattan, el Bronx y Staten Island - se consolidaron en una sola ciudad en 1898. Con una población censada estimada en 2014 de 8.491.079 habitantes distribuidos en una superficie de solo 790 km ², Nueva York es la ciudad más densamente poblada de los Estados Unidos. Hasta 800 idiomas se hablan en Nueva York, por lo que es la ciudad más lingüísticamente diversa del mundo. Según estimaciones del censo de 2014, la región metropolitana de la ciudad de Nueva York sigue siendo por un margen significativo la más poblada de los Estados Unidos, según lo definido tanto por el Área Estadística Metropolitana (20,1 millones de residentes). 
En 2013, el MSA produjo un producto metropolitano bruto (GMP) de casi US $1,39 billones, mientras que en 2012, el CSA generó un GMP de más de US $1,55 billones, ambos clasificados en primer lugar.") ``` ## Evaluation - ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-30000-esquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_esquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 40.07 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | AnswerF1Score | 59.77 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | BERTScore | 91.32 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_1 | 26.76 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_2 | 22.12 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_3 | 18.94 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_4 | 16.41 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | METEOR | 31.2 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | MoverScore | 76.17 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | ROUGE_L | 36.91 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_esquad - dataset_name: default - input_types: ['paragraph_question'] - output_types: ['answer'] - prefix_types: None - model: ckpts/mt5-small-trimmed-es-30000 - max_length: 512 - max_length_output: 32 - epoch: 12 - batch: 32 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-30000-esquad-qa/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Cat/Kitty
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.71 +/- 6.62 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r ShreyasM/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Cedille/fr-boris
[ "pytorch", "gptj", "text-generation", "fr", "dataset:c4", "arxiv:2202.03371", "transformers", "causal-lm", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "GPTJForCausalLM" ], "model_type": "gptj", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
401
null
--- license: creativeml-openrail-m tags: - coreml - stable-diffusion - text-to-image - not-for-all-eyes --- # Core ML Converted Model: - This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br> - Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br> - `split_einsum` version is compatible with all compute unit options including Neural Engine.<br> - `original` version is only compatible with CPU & GPU option.<br> - Custom resolution versions are tagged accordingly.<br> - `vae` tagged files have a vae embedded into the model.<br> - Descriptions are posted as-is from original model source. Not all features and/or results may be available in CoreML format.<br> - This model was converted with `vae-encoder` for i2i. - Models that are 32 bit will have "fp32" in the filename. # Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). # RealBiter: Source(s): [CivitAI](https://civitai.com/models/16592/realbiter) This blend model aims to achieve versatility in generating images that are almost realistic but with a touch of fantasy and a polished aesthetic. It performs well with both illustrations and simulated photographs, but it particularly excels with the latter. It is recommended to use an appropriate VAE to avoid color gradients, and the vae-ft-mse-840000-ema-pruned model works especially well. The blend includes the Offset Noise model, which enhances contrasts and black processing. Incorporating an Offset Noise LORA in image generation can further enhance the results. The model is not specifically designed for NSFW content, but it easily generates this type of images. The model has been reviewed with CLIP tensors checker and it shows no deviation.
dccuchile/albert-base-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
null
--- pipeline_tag: text2text-generation --- # Further instruct Tuning stanford alpaca based on <a href="https://huggingface.co/tloen/alpaca-lora-7" target="_blank">tloen/alpaca-lora-7b</a> on <a href="https://www.consumptionvoucher.gov.hk/public/pdf/2023cvs/FAQ-2023_en.pdf" target="_blank">Hong Kong 2023 Consumption Voucher Scheme Frequently Asked Questions</a> ## How to use it ```python from transformers import LlamaForCausalLM, LlamaTokenizer,GenerationConfig from peft import PeftModel device_map = "auto" tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") model = LlamaForCausalLM.from_pretrained( "decapoda-research/llama-7b-hf", load_in_8bit=True, device_map="auto", ) ### load model after fine tuned on alpaca datasets model = PeftModel.from_pretrained(model, "Nelsonlin0321/alpaca-lora-7b-tuned-on-hk-cvs-fqa") tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") tokenizer.pad_token_id = 0 def generate_prompt_eval(instruction): template = f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response:""" return template eval_generation_config = GenerationConfig( temperature=0.1, top_p=0.75, num_beams=4, ) def generate_answer(instruction): prompt = generate_prompt_eval(instruction) inputs = tokenizer(prompt, return_tensors="pt") input_ids = inputs["input_ids"].cuda() generation_output = model.generate( input_ids=input_ids, generation_config=eval_generation_config, return_dict_in_generate=True, output_scores=True, max_new_tokens=256 ) for s in generation_output.sequences: output = tokenizer.decode(s) # print(output) print("Response:", output.split("### Response:")[1].strip()) question = "Who are eligible to be disbursed with the first-instalment voucher of $1,500 on 16 April?" generate_answer(question) >> Response: All eligible people who have successfully registered under 2022 CVS and met the relevant eligibility criteria will be disbursed with the first-instalment voucher of $1,500 on 16 April. ```
dccuchile/albert-base-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
---
license: creativeml-openrail-m
language:
- ja
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
library_name: diffusers
---
# What is Riga_collection?<br>
Riga_collection is a merge of several different models.<br>
Like HimawariMix, it is strong at rendering backgrounds and fine detail.<br>
Roughly speaking, it is HimawariMix tuned around Riga-san's ideas!<br>
The VAE comes baked in as standard.<br>

---
# README
✅ = Allowed<br>
❌ = Not allowed<br>

❌ Use of this model for commercial image generation services<br>
❌ The act of selling this model or a model merged with this model<br>
❌ Intentionally using this model to produce illegal output<br>
❌ Giving different permissions to a model merged from this model when sharing it<br>
❌ The act of not sharing a copy of CreativeML OpenRAIL-M with all users, including the same usage restrictions, when distributing a model merged from this model or redistributing this model<br>
✅ Commercial use of images generated by this model<br>
✅ Use or redistribution of merged models using this model<br>
✅ Use of this model without crediting the model<br>
❌ Violation of the following description<br>

---
### Recommended settings for this model
Positive: <br>
Loli<br>
Negative: <br>
(worst quality, low quality:1.4), EasyNegative, bad_prompt_version2 , nsfw, (monocrome:1.8) <br>

---
# Author & contact
Twitter:<br>
Me (Min): @min__san<br>
Riga-san: @riga652<br>
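The card declares `library_name: diffusers` but gives no loading snippet, so here is a small, hypothetical sketch of how a merged checkpoint like this is typically run with `diffusers`. The repo id is a placeholder, the positive prompt is a generic example, and the negative prompt is copied from the recommended settings above; note that `EasyNegative` and `bad_prompt_version2` are textual-inversion embeddings that only take effect if installed separately.

```python
# Hypothetical usage sketch with diffusers; the model id below is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/Riga_collection",   # placeholder repo id for this merged checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Negative prompt taken from the card's recommended settings.
negative = "(worst quality, low quality:1.4), EasyNegative, bad_prompt_version2, nsfw, (monocrome:1.8)"

image = pipe(
    "1girl, detailed background, looking at viewer",
    negative_prompt=negative,
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("riga_collection_sample.png")
```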
dccuchile/albert-base-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: JfuentesR/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
dccuchile/albert-base-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-small-gss results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-small-gss This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.0813 - Rouge1: 7.7224 - Rouge2: 5.9202 - Rougel: 7.4878 - Rougelsum: 7.3009 - Gen Len: 9.1892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 10 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 33 | 8.4046 | 10.9442 | 6.6413 | 11.3804 | 11.0126 | 10.5946 | | No log | 2.0 | 66 | 4.2661 | 5.3251 | 1.7316 | 5.3213 | 5.2358 | 7.9730 | | No log | 3.0 | 99 | 3.5016 | 0.6359 | 0.0 | 0.318 | 0.318 | 3.7838 | | No log | 4.0 | 132 | 3.1661 | 3.3576 | 2.3166 | 3.0259 | 3.0233 | 6.1351 | | No log | 5.0 | 165 | 3.0813 | 7.7224 | 5.9202 | 7.4878 | 7.3009 | 9.1892 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
dccuchile/albert-large-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 widget: - text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.", "Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."] example_title: Not Equivalent - text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.", "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."] example_title: Equivalent model-index: - name: platzi results: - task: name: Text Classification type: text-classification dataset: name: datasetX type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8259803921568627 - name: F1 type: f1 value: 0.8657844990548204 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset. It achieves the following results on the evaluation set: - Loss: 0.4514 - Accuracy: 0.8260 - F1: 0.8658 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5383 | 1.09 | 500 | 0.4514 | 0.8260 | 0.8658 | | 0.3727 | 2.18 | 1000 | 0.4630 | 0.8333 | 0.8764 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
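The widget pairs above illustrate what this checkpoint predicts (Equivalent vs. Not Equivalent). As a rough sketch of how such an MRPC fine-tune is queried, assuming the run was pushed to a repo (the path below is a placeholder, since the card only gives the run name `platzi`):

```python
# Sketch: score an MRPC-style sentence pair with the fine-tuned checkpoint.
# The model path is a placeholder for wherever this run was pushed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_path = "path/to/platzi-distilroberta-mrpc"   # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

s1 = "Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier."
s2 = ("With the scandal hanging over Stewart's company revenue the first quarter of the year "
      "dropped 15 percent from the same period a year earlier.")

# Encode the two sentences as a single pair, exactly as during MRPC fine-tuning.
inputs = tokenizer(s1, s2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()

# In GLUE/MRPC the convention is label 1 = equivalent, 0 = not equivalent.
print("equivalent" if pred == 1 else "not equivalent")
```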