The dataset schema (column name, type, and observed range):

| Column | Type | Observed range |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-28 00:48:09 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 534 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-28 00:47:12 |
| card | string | length 11 – 1.01M |
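Since the records below are Hub metadata, here is a minimal sketch of how a snapshot with these columns could be pulled via the `huggingface_hub` client. The export pipeline actually used for this dump is an assumption; `list_models()` is simply the documented entry point for this metadata, and some fields may be `None` unless explicitly expanded.

```python
# Sketch: enumerate Hub models and print the columns used in this dataset.
# Assumption: the dump was built from Hub metadata via list_models(); the
# real export script behind this dataset is not known.
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(sort="downloads", direction=-1, limit=5):
    print(m.id, m.author, m.downloads, m.likes,
          m.library_name, m.pipeline_tag, m.created_at, m.last_modified)
```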
Avitas8485/Dialogpt-medium-v2
Avitas8485
2023-06-22T02:05:05Z
109
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-27T05:26:04Z
--- pipeline_tag: conversational ---
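The card above is bare frontmatter, so here is a hedged usage sketch for a DialoGPT-style checkpoint; the EOS-terminated turn format and the generation settings are assumptions based on how DialoGPT-medium is normally used, not anything documented by this card.

```python
# Hedged sketch: chat-style generation for a DialoGPT-derived model.
# The EOS-terminated turn format is the usual DialoGPT convention, assumed
# here; generation settings are illustrative defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Avitas8485/Dialogpt-medium-v2")
model = AutoModelForCausalLM.from_pretrained("Avitas8485/Dialogpt-medium-v2")

input_ids = tok.encode("Hello, how are you?" + tok.eos_token,
                       return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200,
                           pad_token_id=tok.eos_token_id)
print(tok.decode(reply_ids[0][input_ids.shape[-1]:],
                 skip_special_tokens=True))
```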
natope/mT5-tfidf-10pass-all-questions-QA-22-06-2023
natope
2023-06-22T01:59:17Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-22T00:35:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: mT5-tfidf-10pass-all-questions-QA-22-06-2023 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5-tfidf-10pass-all-questions-QA-22-06-2023 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1052 - Rouge1: 0.135 - Rouge2: 0.0293 - Rougel: 0.1091 - Rougelsum: 0.1091 - Gen Len: 18.3641 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 3.3074 | 1.0 | 3288 | 2.3090 | 0.0802 | 0.0067 | 0.0711 | 0.0711 | 15.4922 | | 2.7161 | 2.0 | 6576 | 2.1227 | 0.0805 | 0.0166 | 0.0665 | 0.0664 | 13.4977 | | 2.6099 | 3.0 | 9864 | 2.1052 | 0.135 | 0.0293 | 0.1091 | 0.1091 | 18.3641 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
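A quick way to try a text2text checkpoint like the one above is the `text2text-generation` pipeline; the `question: ... context: ...` prompt below is only an assumption about the input format, which the card does not document.

```python
# Sketch: querying the mT5 QA model. The prompt serialization is an
# assumption; the card does not state the fine-tuning input format.
from transformers import pipeline

qa = pipeline("text2text-generation",
              model="natope/mT5-tfidf-10pass-all-questions-QA-22-06-2023")
out = qa("question: Who wrote The Hobbit? "
         "context: The Hobbit was written by J.R.R. Tolkien.",
         max_length=48)
print(out[0]["generated_text"])
```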
mike-ravkine/BlueHeeler-12M
mike-ravkine
2023-06-22T01:55:14Z
6
0
transformers
[ "transformers", "gpt2", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-22T01:44:07Z
--- license: mit language: - en pipeline_tag: text-generation widget: - text: 'Bluey:' example_title: Dialogue 1 - text: 'Mom:' example_title: Dialogue 2 library_name: transformers --- BlueHeeler-10M is a nanoGPT (GPT-2) 6-head x 6-layer x 192-deep model with a context size of 64, trained on scripts from the children's show Bluey. Final training log: `iter 2000: loss 1.2913, time 30647.72ms, mfu 0.05%`
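The quoted architecture maps directly onto nanoGPT's config fields; below is a sketch of that mapping, assuming a local checkout of karpathy's nanoGPT (its `model.py` provides `GPT` and `GPTConfig`). The vocabulary size is a guess at nanoGPT's padded GPT-2 default, not stated in the card.

```python
# Sketch: the "6-head x 6-layer x 192-deep, context 64" description as a
# nanoGPT config. Requires https://github.com/karpathy/nanoGPT on the path;
# vocab_size is an assumed default.
from model import GPT, GPTConfig  # nanoGPT's model.py

config = GPTConfig(
    n_layer=6,         # "6-layer"
    n_head=6,          # "6-head"
    n_embd=192,        # "192-deep" embedding width
    block_size=64,     # context size of 64
    vocab_size=50304,  # assumption: nanoGPT's padded GPT-2 vocabulary
)
model = GPT(config)  # roughly 12M parameters, consistent with the repo name
```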
benbav97/ppo-LunarLander-v2
benbav97
2023-06-22T01:54:37Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-22T01:41:22Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.12 +/- 17.65 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
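The usage block in this card (and in the identical stable-baselines3 cards further down) is left as a TODO; a hedged completion following the usual `huggingface_sb3` pattern might look like this. The checkpoint filename inside the repo is an assumption based on common naming.

```python
# Hedged completion of the card's TODO. The filename is assumed; check the
# repo's file list. Older SB3 setups use `gym` instead of `gymnasium`.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="benbav97/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")  # assumed name
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```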
johnpaulbin/gpt2-skript-1m-v5
johnpaulbin
2023-06-22T01:48:20Z
119
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
## GPT-2 for Skript ## Complete your Skript code automatically with a fine-tuned GPT-2 model. Training loss of `0.57` after about 2 epochs (in total); the dataset contains roughly 1.2 million lines of Skript. Inference Colab: https://colab.research.google.com/drive/1ujtLt7MOk7Nsag3q-BYK62Kpoe4Lr4PE
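Since the card gives only a Colab link, here is a minimal local generation sketch; the Skript-style prompt and sampling settings are illustrative assumptions, not taken from the card.

```python
# Sketch: local inference for the Skript completion model. The prompt and
# sampling settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation",
                     model="johnpaulbin/gpt2-skript-1m-v5")
print(generator("on join:", max_new_tokens=60, do_sample=True,
                temperature=0.8)[0]["generated_text"])
```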
bluemoonwj/my_awesome_eli5_clm-model
bluemoonwj
2023-06-22T01:34:22Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-22T00:53:06Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7297 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.8699 | 1.0 | 1109 | 3.7485 | | 3.7734 | 2.0 | 2218 | 3.7342 | | 3.7371 | 3.0 | 3327 | 3.7297 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
agustinl/ppo-Huggy
agustinl
2023-06-22T01:29:30Z
13
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-06-22T01:29:20Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: agustinl/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
zslrmhb/ppo-LunarLander-v2
zslrmhb
2023-06-22T00:59:42Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-20T18:25:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 285.48 +/- 18.94 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
cactusfriend/nightmare-invokeai-prompts
cactusfriend
2023-06-22T00:48:13Z
126
6
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neo", "text-generation", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-04-02T15:30:11Z
--- license: openrail pipeline_tag: text-generation library_name: transformers widget: - text: "a photograph of" example_title: "photo" - text: "a bizarre cg render" example_title: "render" - text: "the spaghetti" example_title: "meal?" - text: "a (detailed+ intricate)+ picture" example_title: "weights" - text: "photograph of various" example_title: "variety" inference: parameters: temperature: 2.6 max_new_tokens: 250 --- A model based upon the prompts of all the images in my InvokeAI output directory, meant to be used with [InvokeAI](https://github.com/invoke-ai/InvokeAI) (a Stable Diffusion implementation/UI) to generate new, probably wild nightmare images. This is mostly trained on positive prompts, though you may catch some words in [] brackets, which will be treated as negative. GPT-Neo is usually quite good at pairing parentheses, quotation marks, etc. - however, don't be too surprised if it generates something that's not quite InvokeAI prompt syntax. To use this model, you can import it as a pipeline like so: ```py from transformers import pipeline generator = pipeline(model="cactusfriend/nightmare-invokeai-prompts", tokenizer="cactusfriend/nightmare-invokeai-prompts", task="text-generation") ``` Here's an example function that'll generate 20 prompts by default, at a temperature of 1.8, which seems good for this model. ```py def makePrompts(prompt: str, *, p: float = 0.9, k: int = 40, num: int = 20, temp: float = 1.8, mnt: int = 150): outputs = generator(prompt, max_new_tokens=mnt, temperature=temp, do_sample=True, top_p=p, top_k=k, num_return_sequences=num) items = set(i['generated_text'] for i in outputs) print("-" * 60) print("\n ---\n".join(items)) print("-" * 60) ``` Then, you can call it like so: ```py makePrompts("a photograph of") # or, to change some defaults: makePrompts("spaghetti all over", temp=1.4, p=0.92, k=45) ```
MiguelQr/ppo-LunarLander-v2
MiguelQr
2023-06-22T00:45:26Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-22T00:45:06Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 243.57 +/- 34.67 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Leuserrrr/finetuning-sentiment-model-amazonbaby5000
Leuserrrr
2023-06-22T00:43:37Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-21T23:57:50Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: finetuning-sentiment-model-amazonbaby5000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-amazonbaby5000 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8039 - Accuracy: 0.9008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.0 - Tokenizers 0.11.0
mihirdeo16/vizdoom_health_gathering_supreme
mihirdeo16
2023-06-22T00:11:01Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T05:12:55Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.73 +/- 4.77 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r mihirdeo16/vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
sxx123/finetune_jingzhan
sxx123
2023-06-22T00:10:38Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:customized", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-22T00:07:26Z
--- tags: - generated_from_trainer datasets: - customized model-index: - name: finetune_jingzhan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune_jingzhan This model is a fine-tuned version of [/home/sxx/LMFlow/models/gpt2](https://huggingface.co//home/sxx/LMFlow/models/gpt2) on the customized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.01 ### Training results ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.10.1 - Tokenizers 0.13.3
agustinl/dqn-LunarLander-v2
agustinl
2023-06-21T23:59:16Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T23:58:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -51.76 +/- 82.58 name: mean_reward verified: false --- # **DQN** Agent playing **LunarLander-v2** This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ztjona/scopic-diffusion-OW-v1.4.1
ztjona
2023-06-21T23:53:15Z
11
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "dataset:ztjona/oswaldo-guayasamin-blip-captions-v2", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-21T15:54:37Z
--- tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image widget: - text: city and clouds example_title: city and clouds - text: tea party example_title: tea party - text: mother working example_title: mother working - text: buddhist monk example_title: buddhist monk datasets: - ztjona/oswaldo-guayasamin-blip-captions-v2 language: - en library_name: diffusers pipeline_tag: text-to-image --- ### Model Description <!-- Provide a longer summary of what this model is. --> - **Finetuned from model:** CompVis/stable-diffusion-v1-4
hannahh7/a2c-PandaReachDense-v2
hannahh7
2023-06-21T22:26:50Z
4
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-15T10:20:48Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -6.04 +/- 3.80 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
hts98/whisper-large-paper_
hts98
2023-06-21T22:18:40Z
2
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-21T18:32:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-large-paper_ results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-paper_ This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4374 - Wer: 47.9863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 1.0 | 143 | 0.3754 | 47.3394 | | No log | 2.0 | 286 | 0.3418 | 44.5511 | | No log | 3.0 | 429 | 0.3522 | 47.7507 | | 0.3895 | 4.0 | 572 | 0.3795 | 48.9312 | | 0.3895 | 5.0 | 715 | 0.4091 | 51.5160 | | 0.3895 | 6.0 | 858 | 0.4374 | 47.9863 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.7.0 - Tokenizers 0.13.2
enkaell/short-jokes
enkaell
2023-06-21T22:11:02Z
5
0
transformers
[ "transformers", "gpt2", "text-generation", "en", "dataset:Fraser/short-jokes", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-21T16:38:44Z
--- datasets: - Fraser/short-jokes language: - en ---
thisjustinh/falcon-7b-cnn-dailymail
thisjustinh
2023-06-21T22:01:30Z
0
0
null
[ "text-generation-inference", "dataset:cnn_dailymail", "license:apache-2.0", "region:us" ]
null
2023-06-20T01:05:22Z
--- tags: - text-generation-inference datasets: - cnn_dailymail model-index: - name: falcon-7b-cnn-dailymail results: [] license: apache-2.0 --- # falcon-7b-cnn-dailymail This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the cnn_dailymail dataset. ## Model description The model inherits the architecture and tokenizer from falcon-7b, but was fine-tuned using 4-bit quantization from `bitsandbytes` and QLoRA from the `peft` library. The Hugging Face `trl` library has an SFTTrainer class that oversaw the fine-tuning process. The resulting model comes from fine-tuning on a single NVIDIA L4 instance (24 GB VRAM) from Google Cloud Platform. ## Intended uses & limitations The model is intended to be used for summarizing news articles. Since the fine-tuning dataset is cnn_dailymail, it's worth limiting to shorter articles from CNN and the Daily Mail for best results. The model is not intended for other summarization purposes, although it would be interesting to see if its summarization capabilities extend to other short forms of text. ## Training and evaluation data The model was fine-tuned over the [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) dataset (the train set specifically), where articles were the "prompts" and highlights were the "responses." Prior to training, the two columns were combined for the causal LM task. Each observation was formatted as the following: ``` ### Article Article goes here... ### Summary Highlights go here... ``` For inference, formatting the article in the same way and finishing with the summary tag indicates that the model should generate a summary. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 25 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 500 ### Training results Not evaluated in detail yet. Note also that the hyperparameters above are largely arbitrary, since no tuning was performed. ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
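Given the prompt format this card describes, inference might look like the sketch below. Loading through `peft.AutoPeftModelForCausalLM` and the exact whitespace of the template are assumptions; the card documents only the section headers.

```python
# Sketch: build the "### Article / ### Summary" prompt the card describes
# and generate a summary. AutoPeftModelForCausalLM and 4-bit loading are
# assumptions about how the adapter is published, not stated by the card.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model_id = "thisjustinh/falcon-7b-cnn-dailymail"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoPeftModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

article = "..."  # a short CNN / Daily Mail-style article goes here
prompt = f"### Article\n{article}\n\n### Summary\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```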
VMware/bert-tiny-mrqa
VMware
2023-06-21T21:59:31Z
171
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "en", "dataset:mrqa", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2023-02-17T20:52:30Z
--- license: apache-2.0 datasets: - mrqa language: - en metrics: - exact_match - f1 model-index: - name: VMware/bert-tiny-mrqa results: - task: type: Question-Answering dataset: type: mrqa # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: mrqa # Required. A pretty name for the dataset. Example: Common Voice (French) metrics: - type: exact_match value: 22.78 name: Eval EM - type: f1 value: 32.42 name: Eval F1 - type: exact_match value: 10.18 name: Test EM - type: f1 value: 18.72 name: Test F1 --- This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab. # Model Details - **Model name:** BERT-Tiny-MRQA - **Model type:** Extractive Question Answering - **Parent Model:** [BERT-Tiny-uncased](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) - **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering) - **Training data size:** 516,819 examples - **Training time:** 26:11 on 1 Nvidia V100 32GB GPU - **Language:** English - **Framework:** PyTorch - **Model version:** 1.0 # Intended Use This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding. # How to Use ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='VMware/bert-tiny-mrqa') context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT." question = "What is MRQA?" result = question_answerer(question=question, context=context) print(result) # { # 'score': 0.134057879447937, # 'start': 76, # 'end': 80, # 'answer': '2019' # } ``` Yes, you read that correctly ... this model thinks MRQA is "2019". Look at its eval and test scores: a coin toss is more likely to get you a decent answer. # Training Details The model was trained for 1 epoch on the MRQA training set. ## Training Hyperparameters ```python args = TrainingArguments( "bert-tiny-mrqa", save_strategy="epoch", learning_rate=1e-5, num_train_epochs=1, weight_decay=0.01, per_device_train_batch_size=16, ) ``` # Evaluation Metrics The model was evaluated using standard metrics for question-answering models, including: - Exact match (EM): the percentage of questions for which the model produces an exact match with the ground truth answer. - F1 score: a weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer.
# Model Family Performance | Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 | |---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 | | BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 | | BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 | | DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 | | DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 | | DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** | | ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 | | ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 | | ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 | | MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 | | MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 | | MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 | | MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 | | MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 | | TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 | | RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 | | RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 | \* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa), which was already trained on MRQA. # Limitations and Bias The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include: - Language: The model is designed to work with English text only and may not perform as well on other languages. - Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge. - Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets. In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
owaiskaifi/ai-qr-generator
owaiskaifi
2023-06-21T21:42:57Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-06-21T21:40:17Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mursel/falcon-7b-instruct-finetuned
Mursel
2023-06-21T21:32:44Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-13T13:15:02Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
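The quantization settings listed in this card correspond one-to-one to a `transformers.BitsAndBytesConfig`; below is a sketch of reloading a base model the same way before attaching this adapter. The base model id is an assumption inferred from the repo name, not stated in the card.

```python
# Sketch: reload the base model with the card's bitsandbytes settings and
# attach the adapter. The base model id is inferred from the repo name.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",  # assumed base model
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "Mursel/falcon-7b-instruct-finetuned")
```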
ShaneEP77/tolkientexts
ShaneEP77
2023-06-21T20:54:01Z
12
1
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "text generation", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-12T14:46:14Z
--- language: en thumbnail: "https://www.billboard.com/wp-content/uploads/media/Middle-earth-Shadow-of-War-GAME-Screenshot-2017-billboard-1548.jpg" tags: - text generation - pytorch license: mit --- ### Tolkientexts Model Welcome! This README.md aims to provide a synopsis of how this model was trained and fine-tuned. Additionally, code examples are included with information on how to use this model. ## Description This model was trained on four novels written by J.R.R. Tolkien, accessed as open-source texts from the internet and through Kaggle (https://www.kaggle.com/), an open-source hub for datasets and data science projects. The style is that of J.R.R. Tolkien: fantasy-esque, with vivid and complex descriptions, as well as poetic and medieval. ## Downstream Uses This model can be used by fans of Tolkien's work for entertainment purposes. ## Recommended Usage The recommended usage of this model is with Kobold AI Colab. Click one of the links below; where you are prompted to select a **Model:**, type "ShaneEP77/tolkientexts" into the drop-down menu and select that model. A clickable link will load for you to click on, and from there you can either enter text right away, or you can toggle to "New Game/Story", where the options "Blank Game/Story" and "Random Game/Story" are available. Links to the GPU and TPU version can be found below: 1. **GPU**: https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/GPU.ipynb 2. **TPU**: https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb ## Example Code ``` from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained('ShaneEP77/tolkientexts') tokenizer = AutoTokenizer.from_pretrained('ShaneEP77/tolkientexts') prompt = '''In the deep twilight of the Shire, beneath a sky adorned with a tapestry of shimmering stars, Bilbo Baggins embarked on a journey with an old friend, Gandalf.''' input_ids = tokenizer.encode(prompt, return_tensors='pt') output = model.generate(input_ids, do_sample=True, temperature=0.8, top_p=0.85, top_k=50, typical_p=0.9, repetition_penalty=1.5, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id) generated_text = tokenizer.decode(output[0]) print(generated_text) ``` ## Tolkientexts This model is a fine-tuned version of **EleutherAI/pythia-2.8b-deduped** (https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on **CoreWeave's** infrastructure (https://www.coreweave.com/). **The books that the model was trained on include the following novels, all written by J.R.R. Tolkien, which made up 1.48MiB of text:** * "The Hobbit" * "The Lord of the Rings: The Fellowship of the Ring" * "The Lord of the Rings: The Two Towers" * "The Lord of the Rings: The Return of the King" **Epochs:** 1 **Steps:** 500 ## Loss and Accuracy Runs of the model were logged with Weights and Biases (https://wandb.ai/site). Charts were created based on 10-20 runs of the model and show a downward trend for loss as the number of steps increases, while accuracy trends upward as the number of steps increases. ![loss](image1.png) ![accuracy](image2.png) ## Meet the Team and Acknowledgements! * Shane Epstein-Petrullo - Author * CoreWeave - Computation Materials *A huge thanks goes out to Wes Brown, David Finster, and Rex Wang for help with this project!* *Referencing CoreWeave's tutorial and finetuner doc was pivotal to this project.
This document can be found at (https://docs.coreweave.com/~/changes/UdikeGislByaE9hH8a7T/machine-learning-and-ai/training/fine-tuning/finetuning-machine-learning-models).*
sertemo/bert-finetuned-ner
sertemo
2023-06-21T20:37:43Z
103
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-21T20:11:02Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9350520575111552 - name: Recall type: recall value: 0.9522046449007069 - name: F1 type: f1 value: 0.9435504044025682 - name: Accuracy type: accuracy value: 0.9867840113027609 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0606 - Precision: 0.9351 - Recall: 0.9522 - F1: 0.9436 - Accuracy: 0.9868 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0874 | 1.0 | 1756 | 0.0674 | 0.9167 | 0.9313 | 0.9240 | 0.9818 | | 0.0352 | 2.0 | 3512 | 0.0628 | 0.9230 | 0.9446 | 0.9337 | 0.9855 | | 0.0175 | 3.0 | 5268 | 0.0606 | 0.9351 | 0.9522 | 0.9436 | 0.9868 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.11.0 - Tokenizers 0.13.3
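For completeness, standard token-classification usage of the NER model above; `aggregation_strategy` is an illustrative choice, not something the card specifies.

```python
# Sketch: entity extraction with the fine-tuned NER model.
from transformers import pipeline

ner = pipeline("token-classification",
               model="sertemo/bert-finetuned-ner",
               aggregation_strategy="simple")  # illustrative setting
print(ner("Hugging Face is based in New York City."))
```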
antphb/DS-Chatbox-facebook-xglm-564M-V4-FT
antphb
2023-06-21T20:36:48Z
20
0
transformers
[ "transformers", "pytorch", "xglm", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-21T18:17:03Z
--- license: mit tags: - generated_from_trainer model-index: - name: DS-Chatbox-facebook-xglm-564M-V4-FT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DS-Chatbox-facebook-xglm-564M-V4-FT This model is a fine-tuned version of [antphb/DS-Chatbox-facebook-xglm-564M-V3](https://huggingface.co/antphb/DS-Chatbox-facebook-xglm-564M-V3) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 1.3576 - eval_runtime: 5.133 - eval_samples_per_second: 51.822 - eval_steps_per_second: 25.911 - epoch: 12.65 - step: 5200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - num_epochs: 15 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.0 - Tokenizers 0.13.3
agshruti/distilbert-base-uncased-finetuned-imdb-r3
agshruti
2023-06-21T20:35:41Z
61
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-21T20:33:00Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: agshruti/distilbert-base-uncased-finetuned-imdb-r3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # agshruti/distilbert-base-uncased-finetuned-imdb-r3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.2879 - Validation Loss: 2.9902 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -997, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.2879 | 2.9902 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.13.0 - Tokenizers 0.13.3
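A minimal fill-mask sketch for the checkpoint above; since the repo ships TensorFlow weights (note the `tf` tag), this assumes TensorFlow is installed so the pipeline can load them.

```python
# Sketch: masked-token prediction. Assumes TensorFlow is available, since
# this repo carries TF weights.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="agshruti/distilbert-base-uncased-finetuned-imdb-r3")
for pred in unmasker("This movie was a [MASK] experience.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```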
henri28/final_tcc_model
henri28
2023-06-21T20:32:39Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-21T16:42:43Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - sacrebleu model-index: - name: final_tcc_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # final_tcc_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7783 - Sacrebleu: 7.6467 - Gen Len: 17.9035 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | No log | 1.0 | 275 | 0.8201 | 7.1607 | 17.8917 | | 0.9564 | 2.0 | 550 | 0.7971 | 7.3848 | 17.9008 | | 0.9564 | 3.0 | 825 | 0.7862 | 7.5097 | 17.909 | | 0.8977 | 4.0 | 1100 | 0.7803 | 7.5882 | 17.9035 | | 0.8977 | 5.0 | 1375 | 0.7783 | 7.6467 | 17.9035 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cpu - Datasets 2.13.0 - Tokenizers 0.13.3
DigKingy/ToonYou-JP-Alpha1
DigKingy
2023-06-21T20:26:32Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T20:26:32Z
--- license: creativeml-openrail-m ---
magnustragardh/distilhubert-finetuned-gtzan
magnustragardh
2023-06-21T20:16:30Z
160
0
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-06-21T18:53:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.7058 - Accuracy: 0.79 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7675 | 1.0 | 112 | 1.8184 | 0.42 | | 1.2504 | 2.0 | 225 | 1.3015 | 0.62 | | 1.0353 | 3.0 | 337 | 0.9890 | 0.72 | | 0.8318 | 4.0 | 450 | 0.8237 | 0.8 | | 0.4429 | 5.0 | 562 | 0.8123 | 0.78 | | 0.4286 | 6.0 | 675 | 0.6820 | 0.8 | | 0.2553 | 7.0 | 787 | 0.7826 | 0.78 | | 0.3022 | 8.0 | 900 | 0.6811 | 0.77 | | 0.1889 | 9.0 | 1012 | 0.6761 | 0.8 | | 0.1073 | 9.96 | 1120 | 0.7058 | 0.79 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
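Inference for the GTZAN classifier above goes through the audio-classification pipeline; the file path below is a placeholder, and any short clip of a song should work.

```python
# Sketch: genre prediction. "some_song.wav" is a placeholder path.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="magnustragardh/distilhubert-finetuned-gtzan")
print(classifier("some_song.wav")[:3])  # top-3 genres with scores
```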
newsrx/instructor-large-newsrx
newsrx
2023-06-21T20:05:33Z
7
0
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "prompt-retrieval", "text-reranking", "feature-extraction", "sentence-similarity", "transformers", "English", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2212.09741", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
sentence-similarity
2023-06-21T20:05:33Z
--- pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - prompt-retrieval - text-reranking - sentence-transformers - feature-extraction - sentence-similarity - transformers - t5 - English - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb language: en inference: false license: apache-2.0 model-index: - name: INSTRUCTOR results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 88.13432835820896 - type: ap value: 59.298209334395665 - type: f1 value: 83.31769058643586 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.526375 - type: ap value: 88.16327709705504 - type: f1 value: 91.51095801287843 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.856 - type: f1 value: 45.41490917650942 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 31.223 - type: map_at_10 value: 47.947 - type: map_at_100 value: 48.742000000000004 - type: map_at_1000 value: 48.745 - type: map_at_3 value: 43.137 - type: map_at_5 value: 45.992 - type: mrr_at_1 value: 32.432 - type: mrr_at_10 value: 48.4 - type: mrr_at_100 value: 49.202 - type: mrr_at_1000 value: 49.205 - type: mrr_at_3 value: 43.551 - type: mrr_at_5 value: 46.467999999999996 - type: ndcg_at_1 value: 31.223 - type: ndcg_at_10 value: 57.045 - type: ndcg_at_100 value: 60.175 - type: ndcg_at_1000 value: 60.233000000000004 - type: ndcg_at_3 value: 47.171 - type: ndcg_at_5 value: 52.322 - type: precision_at_1 value: 31.223 - type: precision_at_10 value: 8.599 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.63 - type: precision_at_5 value: 14.282 - type: recall_at_1 value: 31.223 - type: recall_at_10 value: 85.989 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.502 - type: recall_at_3 value: 58.89 - type: recall_at_5 value: 71.408 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 43.1621946393635 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.56417132407894 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.29539304390207 - type: mrr value: 76.44484017060196 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 84.38746499431112 - task: type: Classification dataset: type: mteb/banking77 name: 
MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 78.51298701298701 - type: f1 value: 77.49041754069235 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.61848554098577 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 31.32623280148178 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 35.803000000000004 - type: map_at_10 value: 48.848 - type: map_at_100 value: 50.5 - type: map_at_1000 value: 50.602999999999994 - type: map_at_3 value: 45.111000000000004 - type: map_at_5 value: 47.202 - type: mrr_at_1 value: 44.635000000000005 - type: mrr_at_10 value: 55.593 - type: mrr_at_100 value: 56.169999999999995 - type: mrr_at_1000 value: 56.19499999999999 - type: mrr_at_3 value: 53.361999999999995 - type: mrr_at_5 value: 54.806999999999995 - type: ndcg_at_1 value: 44.635000000000005 - type: ndcg_at_10 value: 55.899 - type: ndcg_at_100 value: 60.958 - type: ndcg_at_1000 value: 62.302 - type: ndcg_at_3 value: 51.051 - type: ndcg_at_5 value: 53.351000000000006 - type: precision_at_1 value: 44.635000000000005 - type: precision_at_10 value: 10.786999999999999 - type: precision_at_100 value: 1.6580000000000001 - type: precision_at_1000 value: 0.213 - type: precision_at_3 value: 24.893 - type: precision_at_5 value: 17.740000000000002 - type: recall_at_1 value: 35.803000000000004 - type: recall_at_10 value: 68.657 - type: recall_at_100 value: 89.77199999999999 - type: recall_at_1000 value: 97.67 - type: recall_at_3 value: 54.066 - type: recall_at_5 value: 60.788 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.706 - type: map_at_10 value: 44.896 - type: map_at_100 value: 46.299 - type: map_at_1000 value: 46.44 - type: map_at_3 value: 41.721000000000004 - type: map_at_5 value: 43.486000000000004 - type: mrr_at_1 value: 41.592 - type: mrr_at_10 value: 50.529 - type: mrr_at_100 value: 51.22 - type: mrr_at_1000 value: 51.258 - type: mrr_at_3 value: 48.205999999999996 - type: mrr_at_5 value: 49.528 - type: ndcg_at_1 value: 41.592 - type: ndcg_at_10 value: 50.77199999999999 - type: ndcg_at_100 value: 55.383 - type: ndcg_at_1000 value: 57.288 - type: ndcg_at_3 value: 46.324 - type: ndcg_at_5 value: 48.346000000000004 - type: precision_at_1 value: 41.592 - type: precision_at_10 value: 9.516 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 22.399 - type: precision_at_5 value: 15.770999999999999 - type: recall_at_1 value: 33.706 - type: recall_at_10 value: 61.353 - type: recall_at_100 value: 80.182 - type: recall_at_1000 value: 91.896 - type: recall_at_3 value: 48.204 - type: recall_at_5 value: 53.89699999999999 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 44.424 - type: map_at_10 value: 57.169000000000004 - type: map_at_100 value: 58.202 - type: map_at_1000 value: 58.242000000000004 - 
type: map_at_3 value: 53.825 - type: map_at_5 value: 55.714 - type: mrr_at_1 value: 50.470000000000006 - type: mrr_at_10 value: 60.489000000000004 - type: mrr_at_100 value: 61.096 - type: mrr_at_1000 value: 61.112 - type: mrr_at_3 value: 58.192 - type: mrr_at_5 value: 59.611999999999995 - type: ndcg_at_1 value: 50.470000000000006 - type: ndcg_at_10 value: 63.071999999999996 - type: ndcg_at_100 value: 66.964 - type: ndcg_at_1000 value: 67.659 - type: ndcg_at_3 value: 57.74399999999999 - type: ndcg_at_5 value: 60.367000000000004 - type: precision_at_1 value: 50.470000000000006 - type: precision_at_10 value: 10.019 - type: precision_at_100 value: 1.29 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 25.558999999999997 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 44.424 - type: recall_at_10 value: 77.02 - type: recall_at_100 value: 93.738 - type: recall_at_1000 value: 98.451 - type: recall_at_3 value: 62.888 - type: recall_at_5 value: 69.138 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.294 - type: map_at_10 value: 34.503 - type: map_at_100 value: 35.641 - type: map_at_1000 value: 35.724000000000004 - type: map_at_3 value: 31.753999999999998 - type: map_at_5 value: 33.190999999999995 - type: mrr_at_1 value: 28.362 - type: mrr_at_10 value: 36.53 - type: mrr_at_100 value: 37.541000000000004 - type: mrr_at_1000 value: 37.602000000000004 - type: mrr_at_3 value: 33.917 - type: mrr_at_5 value: 35.358000000000004 - type: ndcg_at_1 value: 28.362 - type: ndcg_at_10 value: 39.513999999999996 - type: ndcg_at_100 value: 44.815 - type: ndcg_at_1000 value: 46.839 - type: ndcg_at_3 value: 34.02 - type: ndcg_at_5 value: 36.522 - type: precision_at_1 value: 28.362 - type: precision_at_10 value: 6.101999999999999 - type: precision_at_100 value: 0.9129999999999999 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 14.161999999999999 - type: precision_at_5 value: 9.966 - type: recall_at_1 value: 26.294 - type: recall_at_10 value: 53.098 - type: recall_at_100 value: 76.877 - type: recall_at_1000 value: 91.834 - type: recall_at_3 value: 38.266 - type: recall_at_5 value: 44.287 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.407 - type: map_at_10 value: 25.185999999999996 - type: map_at_100 value: 26.533 - type: map_at_1000 value: 26.657999999999998 - type: map_at_3 value: 22.201999999999998 - type: map_at_5 value: 23.923 - type: mrr_at_1 value: 20.522000000000002 - type: mrr_at_10 value: 29.522 - type: mrr_at_100 value: 30.644 - type: mrr_at_1000 value: 30.713 - type: mrr_at_3 value: 26.679000000000002 - type: mrr_at_5 value: 28.483000000000004 - type: ndcg_at_1 value: 20.522000000000002 - type: ndcg_at_10 value: 30.656 - type: ndcg_at_100 value: 36.864999999999995 - type: ndcg_at_1000 value: 39.675 - type: ndcg_at_3 value: 25.319000000000003 - type: ndcg_at_5 value: 27.992 - type: precision_at_1 value: 20.522000000000002 - type: precision_at_10 value: 5.795999999999999 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 12.396 - type: precision_at_5 value: 9.328 - type: recall_at_1 value: 16.407 - type: recall_at_10 value: 43.164 - type: recall_at_100 value: 69.695 - type: recall_at_1000 value: 89.41900000000001 - 
type: recall_at_3 value: 28.634999999999998 - type: recall_at_5 value: 35.308 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.473 - type: map_at_10 value: 41.676 - type: map_at_100 value: 43.120999999999995 - type: map_at_1000 value: 43.230000000000004 - type: map_at_3 value: 38.306000000000004 - type: map_at_5 value: 40.355999999999995 - type: mrr_at_1 value: 37.536 - type: mrr_at_10 value: 47.643 - type: mrr_at_100 value: 48.508 - type: mrr_at_1000 value: 48.551 - type: mrr_at_3 value: 45.348 - type: mrr_at_5 value: 46.744 - type: ndcg_at_1 value: 37.536 - type: ndcg_at_10 value: 47.823 - type: ndcg_at_100 value: 53.395 - type: ndcg_at_1000 value: 55.271 - type: ndcg_at_3 value: 42.768 - type: ndcg_at_5 value: 45.373000000000005 - type: precision_at_1 value: 37.536 - type: precision_at_10 value: 8.681 - type: precision_at_100 value: 1.34 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 20.468 - type: precision_at_5 value: 14.495 - type: recall_at_1 value: 30.473 - type: recall_at_10 value: 60.092999999999996 - type: recall_at_100 value: 82.733 - type: recall_at_1000 value: 94.875 - type: recall_at_3 value: 45.734 - type: recall_at_5 value: 52.691 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.976000000000003 - type: map_at_10 value: 41.097 - type: map_at_100 value: 42.547000000000004 - type: map_at_1000 value: 42.659000000000006 - type: map_at_3 value: 37.251 - type: map_at_5 value: 39.493 - type: mrr_at_1 value: 37.557 - type: mrr_at_10 value: 46.605000000000004 - type: mrr_at_100 value: 47.487 - type: mrr_at_1000 value: 47.54 - type: mrr_at_3 value: 43.721 - type: mrr_at_5 value: 45.411 - type: ndcg_at_1 value: 37.557 - type: ndcg_at_10 value: 47.449000000000005 - type: ndcg_at_100 value: 53.052 - type: ndcg_at_1000 value: 55.010999999999996 - type: ndcg_at_3 value: 41.439 - type: ndcg_at_5 value: 44.292 - type: precision_at_1 value: 37.557 - type: precision_at_10 value: 8.847 - type: precision_at_100 value: 1.357 - type: precision_at_1000 value: 0.16999999999999998 - type: precision_at_3 value: 20.091 - type: precision_at_5 value: 14.384 - type: recall_at_1 value: 29.976000000000003 - type: recall_at_10 value: 60.99099999999999 - type: recall_at_100 value: 84.245 - type: recall_at_1000 value: 96.97200000000001 - type: recall_at_3 value: 43.794 - type: recall_at_5 value: 51.778999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.099166666666665 - type: map_at_10 value: 38.1365 - type: map_at_100 value: 39.44491666666667 - type: map_at_1000 value: 39.55858333333334 - type: map_at_3 value: 35.03641666666666 - type: map_at_5 value: 36.79833333333334 - type: mrr_at_1 value: 33.39966666666667 - type: mrr_at_10 value: 42.42583333333333 - type: mrr_at_100 value: 43.28575 - type: mrr_at_1000 value: 43.33741666666667 - type: mrr_at_3 value: 39.94975 - type: mrr_at_5 value: 41.41633333333334 - type: ndcg_at_1 value: 33.39966666666667 - type: ndcg_at_10 value: 43.81741666666667 - type: ndcg_at_100 value: 49.08166666666667 - type: ndcg_at_1000 value: 51.121166666666674 - type: ndcg_at_3 value: 38.73575 - type: ndcg_at_5 value: 41.18158333333333 - type: precision_at_1 value: 33.39966666666667 - type: 
precision_at_10 value: 7.738916666666667 - type: precision_at_100 value: 1.2265833333333331 - type: precision_at_1000 value: 0.15983333333333336 - type: precision_at_3 value: 17.967416666666665 - type: precision_at_5 value: 12.78675 - type: recall_at_1 value: 28.099166666666665 - type: recall_at_10 value: 56.27049999999999 - type: recall_at_100 value: 78.93291666666667 - type: recall_at_1000 value: 92.81608333333334 - type: recall_at_3 value: 42.09775 - type: recall_at_5 value: 48.42533333333334 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.663 - type: map_at_10 value: 30.377 - type: map_at_100 value: 31.426 - type: map_at_1000 value: 31.519000000000002 - type: map_at_3 value: 28.069 - type: map_at_5 value: 29.256999999999998 - type: mrr_at_1 value: 26.687 - type: mrr_at_10 value: 33.107 - type: mrr_at_100 value: 34.055 - type: mrr_at_1000 value: 34.117999999999995 - type: mrr_at_3 value: 31.058000000000003 - type: mrr_at_5 value: 32.14 - type: ndcg_at_1 value: 26.687 - type: ndcg_at_10 value: 34.615 - type: ndcg_at_100 value: 39.776 - type: ndcg_at_1000 value: 42.05 - type: ndcg_at_3 value: 30.322 - type: ndcg_at_5 value: 32.157000000000004 - type: precision_at_1 value: 26.687 - type: precision_at_10 value: 5.491 - type: precision_at_100 value: 0.877 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 13.139000000000001 - type: precision_at_5 value: 9.049 - type: recall_at_1 value: 23.663 - type: recall_at_10 value: 45.035 - type: recall_at_100 value: 68.554 - type: recall_at_1000 value: 85.077 - type: recall_at_3 value: 32.982 - type: recall_at_5 value: 37.688 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.403 - type: map_at_10 value: 25.197000000000003 - type: map_at_100 value: 26.355 - type: map_at_1000 value: 26.487 - type: map_at_3 value: 22.733 - type: map_at_5 value: 24.114 - type: mrr_at_1 value: 21.37 - type: mrr_at_10 value: 29.091 - type: mrr_at_100 value: 30.018 - type: mrr_at_1000 value: 30.096 - type: mrr_at_3 value: 26.887 - type: mrr_at_5 value: 28.157 - type: ndcg_at_1 value: 21.37 - type: ndcg_at_10 value: 30.026000000000003 - type: ndcg_at_100 value: 35.416 - type: ndcg_at_1000 value: 38.45 - type: ndcg_at_3 value: 25.764 - type: ndcg_at_5 value: 27.742 - type: precision_at_1 value: 21.37 - type: precision_at_10 value: 5.609 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.14300000000000002 - type: precision_at_3 value: 12.423 - type: precision_at_5 value: 9.009 - type: recall_at_1 value: 17.403 - type: recall_at_10 value: 40.573 - type: recall_at_100 value: 64.818 - type: recall_at_1000 value: 86.53699999999999 - type: recall_at_3 value: 28.493000000000002 - type: recall_at_5 value: 33.660000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.639 - type: map_at_10 value: 38.951 - type: map_at_100 value: 40.238 - type: map_at_1000 value: 40.327 - type: map_at_3 value: 35.842 - type: map_at_5 value: 37.617 - type: mrr_at_1 value: 33.769 - type: mrr_at_10 value: 43.088 - type: mrr_at_100 value: 44.03 - type: mrr_at_1000 value: 44.072 - type: mrr_at_3 value: 40.656 - type: mrr_at_5 value: 42.138999999999996 - type: ndcg_at_1 value: 
33.769 - type: ndcg_at_10 value: 44.676 - type: ndcg_at_100 value: 50.416000000000004 - type: ndcg_at_1000 value: 52.227999999999994 - type: ndcg_at_3 value: 39.494 - type: ndcg_at_5 value: 42.013 - type: precision_at_1 value: 33.769 - type: precision_at_10 value: 7.668 - type: precision_at_100 value: 1.18 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 18.221 - type: precision_at_5 value: 12.966 - type: recall_at_1 value: 28.639 - type: recall_at_10 value: 57.687999999999995 - type: recall_at_100 value: 82.541 - type: recall_at_1000 value: 94.896 - type: recall_at_3 value: 43.651 - type: recall_at_5 value: 49.925999999999995 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.57 - type: map_at_10 value: 40.004 - type: map_at_100 value: 41.75 - type: map_at_1000 value: 41.97 - type: map_at_3 value: 36.788 - type: map_at_5 value: 38.671 - type: mrr_at_1 value: 35.375 - type: mrr_at_10 value: 45.121 - type: mrr_at_100 value: 45.994 - type: mrr_at_1000 value: 46.04 - type: mrr_at_3 value: 42.227 - type: mrr_at_5 value: 43.995 - type: ndcg_at_1 value: 35.375 - type: ndcg_at_10 value: 46.392 - type: ndcg_at_100 value: 52.196 - type: ndcg_at_1000 value: 54.274 - type: ndcg_at_3 value: 41.163 - type: ndcg_at_5 value: 43.813 - type: precision_at_1 value: 35.375 - type: precision_at_10 value: 8.676 - type: precision_at_100 value: 1.678 - type: precision_at_1000 value: 0.253 - type: precision_at_3 value: 19.104 - type: precision_at_5 value: 13.913 - type: recall_at_1 value: 29.57 - type: recall_at_10 value: 58.779 - type: recall_at_100 value: 83.337 - type: recall_at_1000 value: 95.979 - type: recall_at_3 value: 44.005 - type: recall_at_5 value: 50.975 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.832 - type: map_at_10 value: 29.733999999999998 - type: map_at_100 value: 30.727 - type: map_at_1000 value: 30.843999999999998 - type: map_at_3 value: 26.834999999999997 - type: map_at_5 value: 28.555999999999997 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 31.791999999999998 - type: mrr_at_100 value: 32.666000000000004 - type: mrr_at_1000 value: 32.751999999999995 - type: mrr_at_3 value: 29.144 - type: mrr_at_5 value: 30.622 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 34.915 - type: ndcg_at_100 value: 39.744 - type: ndcg_at_1000 value: 42.407000000000004 - type: ndcg_at_3 value: 29.421000000000003 - type: ndcg_at_5 value: 32.211 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 5.675 - type: precision_at_100 value: 0.872 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 12.753999999999998 - type: precision_at_5 value: 9.353 - type: recall_at_1 value: 20.832 - type: recall_at_10 value: 48.795 - type: recall_at_100 value: 70.703 - type: recall_at_1000 value: 90.187 - type: recall_at_3 value: 34.455000000000005 - type: recall_at_5 value: 40.967 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 10.334 - type: map_at_10 value: 19.009999999999998 - type: map_at_100 value: 21.129 - type: map_at_1000 value: 21.328 - type: map_at_3 value: 15.152 - type: map_at_5 value: 17.084 - type: mrr_at_1 value: 23.453 - type: mrr_at_10 value: 36.099 - type: mrr_at_100 value: 37.069 - type: 
mrr_at_1000 value: 37.104 - type: mrr_at_3 value: 32.096000000000004 - type: mrr_at_5 value: 34.451 - type: ndcg_at_1 value: 23.453 - type: ndcg_at_10 value: 27.739000000000004 - type: ndcg_at_100 value: 35.836 - type: ndcg_at_1000 value: 39.242 - type: ndcg_at_3 value: 21.263 - type: ndcg_at_5 value: 23.677 - type: precision_at_1 value: 23.453 - type: precision_at_10 value: 9.199 - type: precision_at_100 value: 1.791 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 16.2 - type: precision_at_5 value: 13.147 - type: recall_at_1 value: 10.334 - type: recall_at_10 value: 35.177 - type: recall_at_100 value: 63.009 - type: recall_at_1000 value: 81.938 - type: recall_at_3 value: 19.914 - type: recall_at_5 value: 26.077 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.212 - type: map_at_10 value: 17.386 - type: map_at_100 value: 24.234 - type: map_at_1000 value: 25.724999999999998 - type: map_at_3 value: 12.727 - type: map_at_5 value: 14.785 - type: mrr_at_1 value: 59.25 - type: mrr_at_10 value: 68.687 - type: mrr_at_100 value: 69.133 - type: mrr_at_1000 value: 69.14099999999999 - type: mrr_at_3 value: 66.917 - type: mrr_at_5 value: 67.742 - type: ndcg_at_1 value: 48.625 - type: ndcg_at_10 value: 36.675999999999995 - type: ndcg_at_100 value: 41.543 - type: ndcg_at_1000 value: 49.241 - type: ndcg_at_3 value: 41.373 - type: ndcg_at_5 value: 38.707 - type: precision_at_1 value: 59.25 - type: precision_at_10 value: 28.525 - type: precision_at_100 value: 9.027000000000001 - type: precision_at_1000 value: 1.8339999999999999 - type: precision_at_3 value: 44.833 - type: precision_at_5 value: 37.35 - type: recall_at_1 value: 8.212 - type: recall_at_10 value: 23.188 - type: recall_at_100 value: 48.613 - type: recall_at_1000 value: 73.093 - type: recall_at_3 value: 14.419 - type: recall_at_5 value: 17.798 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.725 - type: f1 value: 46.50743309855908 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 55.086 - type: map_at_10 value: 66.914 - type: map_at_100 value: 67.321 - type: map_at_1000 value: 67.341 - type: map_at_3 value: 64.75800000000001 - type: map_at_5 value: 66.189 - type: mrr_at_1 value: 59.28600000000001 - type: mrr_at_10 value: 71.005 - type: mrr_at_100 value: 71.304 - type: mrr_at_1000 value: 71.313 - type: mrr_at_3 value: 69.037 - type: mrr_at_5 value: 70.35 - type: ndcg_at_1 value: 59.28600000000001 - type: ndcg_at_10 value: 72.695 - type: ndcg_at_100 value: 74.432 - type: ndcg_at_1000 value: 74.868 - type: ndcg_at_3 value: 68.72200000000001 - type: ndcg_at_5 value: 71.081 - type: precision_at_1 value: 59.28600000000001 - type: precision_at_10 value: 9.499 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 27.503 - type: precision_at_5 value: 17.854999999999997 - type: recall_at_1 value: 55.086 - type: recall_at_10 value: 86.453 - type: recall_at_100 value: 94.028 - type: recall_at_1000 value: 97.052 - type: recall_at_3 value: 75.821 - type: recall_at_5 value: 81.6 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 22.262999999999998 
- type: map_at_10 value: 37.488 - type: map_at_100 value: 39.498 - type: map_at_1000 value: 39.687 - type: map_at_3 value: 32.529 - type: map_at_5 value: 35.455 - type: mrr_at_1 value: 44.907000000000004 - type: mrr_at_10 value: 53.239000000000004 - type: mrr_at_100 value: 54.086 - type: mrr_at_1000 value: 54.122 - type: mrr_at_3 value: 51.235 - type: mrr_at_5 value: 52.415 - type: ndcg_at_1 value: 44.907000000000004 - type: ndcg_at_10 value: 45.446 - type: ndcg_at_100 value: 52.429 - type: ndcg_at_1000 value: 55.169000000000004 - type: ndcg_at_3 value: 41.882000000000005 - type: ndcg_at_5 value: 43.178 - type: precision_at_1 value: 44.907000000000004 - type: precision_at_10 value: 12.931999999999999 - type: precision_at_100 value: 2.025 - type: precision_at_1000 value: 0.248 - type: precision_at_3 value: 28.652 - type: precision_at_5 value: 21.204 - type: recall_at_1 value: 22.262999999999998 - type: recall_at_10 value: 52.447 - type: recall_at_100 value: 78.045 - type: recall_at_1000 value: 94.419 - type: recall_at_3 value: 38.064 - type: recall_at_5 value: 44.769 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 32.519 - type: map_at_10 value: 45.831 - type: map_at_100 value: 46.815 - type: map_at_1000 value: 46.899 - type: map_at_3 value: 42.836 - type: map_at_5 value: 44.65 - type: mrr_at_1 value: 65.037 - type: mrr_at_10 value: 72.16 - type: mrr_at_100 value: 72.51100000000001 - type: mrr_at_1000 value: 72.53 - type: mrr_at_3 value: 70.682 - type: mrr_at_5 value: 71.54599999999999 - type: ndcg_at_1 value: 65.037 - type: ndcg_at_10 value: 55.17999999999999 - type: ndcg_at_100 value: 58.888 - type: ndcg_at_1000 value: 60.648 - type: ndcg_at_3 value: 50.501 - type: ndcg_at_5 value: 52.977 - type: precision_at_1 value: 65.037 - type: precision_at_10 value: 11.530999999999999 - type: precision_at_100 value: 1.4460000000000002 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 31.483 - type: precision_at_5 value: 20.845 - type: recall_at_1 value: 32.519 - type: recall_at_10 value: 57.657000000000004 - type: recall_at_100 value: 72.30199999999999 - type: recall_at_1000 value: 84.024 - type: recall_at_3 value: 47.225 - type: recall_at_5 value: 52.113 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 88.3168 - type: ap value: 83.80165516037135 - type: f1 value: 88.29942471066407 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 20.724999999999998 - type: map_at_10 value: 32.736 - type: map_at_100 value: 33.938 - type: map_at_1000 value: 33.991 - type: map_at_3 value: 28.788000000000004 - type: map_at_5 value: 31.016 - type: mrr_at_1 value: 21.361 - type: mrr_at_10 value: 33.323 - type: mrr_at_100 value: 34.471000000000004 - type: mrr_at_1000 value: 34.518 - type: mrr_at_3 value: 29.453000000000003 - type: mrr_at_5 value: 31.629 - type: ndcg_at_1 value: 21.361 - type: ndcg_at_10 value: 39.649 - type: ndcg_at_100 value: 45.481 - type: ndcg_at_1000 value: 46.775 - type: ndcg_at_3 value: 31.594 - type: ndcg_at_5 value: 35.543 - type: precision_at_1 value: 21.361 - type: precision_at_10 value: 6.3740000000000006 - type: precision_at_100 value: 0.931 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.514999999999999 - type: 
precision_at_5 value: 10.100000000000001 - type: recall_at_1 value: 20.724999999999998 - type: recall_at_10 value: 61.034 - type: recall_at_100 value: 88.062 - type: recall_at_1000 value: 97.86399999999999 - type: recall_at_3 value: 39.072 - type: recall_at_5 value: 48.53 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.8919288645691 - type: f1 value: 93.57059586398059 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.97993616051072 - type: f1 value: 48.244319183606535 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.90047074646941 - type: f1 value: 66.48999056063725 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.34566240753195 - type: f1 value: 73.54164154290658 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.21866934757011 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.000936217235534 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.68189362520352 - type: mrr value: 32.69603637784303 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.078 - type: map_at_10 value: 12.671 - type: map_at_100 value: 16.291 - type: map_at_1000 value: 17.855999999999998 - type: map_at_3 value: 9.610000000000001 - type: map_at_5 value: 11.152 - type: mrr_at_1 value: 43.963 - type: mrr_at_10 value: 53.173 - type: mrr_at_100 value: 53.718999999999994 - type: mrr_at_1000 value: 53.756 - type: mrr_at_3 value: 50.980000000000004 - type: mrr_at_5 value: 52.42 - type: ndcg_at_1 value: 42.415000000000006 - type: ndcg_at_10 value: 34.086 - type: ndcg_at_100 value: 32.545 - type: ndcg_at_1000 value: 41.144999999999996 - type: ndcg_at_3 value: 39.434999999999995 - type: ndcg_at_5 value: 37.888 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 25.014999999999997 - type: precision_at_100 value: 8.594 - type: precision_at_1000 value: 2.169 - type: precision_at_3 value: 37.049 - type: precision_at_5 value: 33.065 - type: recall_at_1 value: 6.078 - type: recall_at_10 value: 16.17 - type: recall_at_100 value: 34.512 - type: recall_at_1000 value: 65.447 - type: recall_at_3 value: 10.706 - type: recall_at_5 value: 13.158 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 27.378000000000004 - type: map_at_10 value: 42.178 - type: map_at_100 value: 43.32 - type: map_at_1000 value: 43.358000000000004 
- type: map_at_3 value: 37.474000000000004 - type: map_at_5 value: 40.333000000000006 - type: mrr_at_1 value: 30.823 - type: mrr_at_10 value: 44.626 - type: mrr_at_100 value: 45.494 - type: mrr_at_1000 value: 45.519 - type: mrr_at_3 value: 40.585 - type: mrr_at_5 value: 43.146 - type: ndcg_at_1 value: 30.794 - type: ndcg_at_10 value: 50.099000000000004 - type: ndcg_at_100 value: 54.900999999999996 - type: ndcg_at_1000 value: 55.69499999999999 - type: ndcg_at_3 value: 41.238 - type: ndcg_at_5 value: 46.081 - type: precision_at_1 value: 30.794 - type: precision_at_10 value: 8.549 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 18.926000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 27.378000000000004 - type: recall_at_10 value: 71.842 - type: recall_at_100 value: 92.565 - type: recall_at_1000 value: 98.402 - type: recall_at_3 value: 49.053999999999995 - type: recall_at_5 value: 60.207 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.557 - type: map_at_10 value: 84.729 - type: map_at_100 value: 85.369 - type: map_at_1000 value: 85.382 - type: map_at_3 value: 81.72 - type: map_at_5 value: 83.613 - type: mrr_at_1 value: 81.3 - type: mrr_at_10 value: 87.488 - type: mrr_at_100 value: 87.588 - type: mrr_at_1000 value: 87.589 - type: mrr_at_3 value: 86.53 - type: mrr_at_5 value: 87.18599999999999 - type: ndcg_at_1 value: 81.28999999999999 - type: ndcg_at_10 value: 88.442 - type: ndcg_at_100 value: 89.637 - type: ndcg_at_1000 value: 89.70700000000001 - type: ndcg_at_3 value: 85.55199999999999 - type: ndcg_at_5 value: 87.154 - type: precision_at_1 value: 81.28999999999999 - type: precision_at_10 value: 13.489999999999998 - type: precision_at_100 value: 1.54 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.553 - type: precision_at_5 value: 24.708 - type: recall_at_1 value: 70.557 - type: recall_at_10 value: 95.645 - type: recall_at_100 value: 99.693 - type: recall_at_1000 value: 99.995 - type: recall_at_3 value: 87.359 - type: recall_at_5 value: 91.89699999999999 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 63.65060114776209 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.63271250680617 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.263 - type: map_at_10 value: 10.801 - type: map_at_100 value: 12.888 - type: map_at_1000 value: 13.224 - type: map_at_3 value: 7.362 - type: map_at_5 value: 9.149000000000001 - type: mrr_at_1 value: 21 - type: mrr_at_10 value: 31.416 - type: mrr_at_100 value: 32.513 - type: mrr_at_1000 value: 32.58 - type: mrr_at_3 value: 28.116999999999997 - type: mrr_at_5 value: 29.976999999999997 - type: ndcg_at_1 value: 21 - type: ndcg_at_10 value: 18.551000000000002 - type: ndcg_at_100 value: 26.657999999999998 - type: ndcg_at_1000 value: 32.485 - type: ndcg_at_3 value: 16.834 - type: ndcg_at_5 value: 15.204999999999998 - type: precision_at_1 value: 21 - type: precision_at_10 value: 9.84 - type: precision_at_100 value: 2.16 - type: precision_at_1000 value: 
0.35500000000000004 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 13.62 - type: recall_at_1 value: 4.263 - type: recall_at_10 value: 19.922 - type: recall_at_100 value: 43.808 - type: recall_at_1000 value: 72.14500000000001 - type: recall_at_3 value: 9.493 - type: recall_at_5 value: 13.767999999999999 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 81.27446313317233 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 76.27963301217527 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 88.18495048450949 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 81.91982338692046 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 89.00896818385291 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 85.48814644586132 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 90.30116926966582 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 67.74132963032342 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 86.87741355780479 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.0019012295875 - type: mrr value: 94.70267024188593 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 50.05 - type: map_at_10 value: 59.36 - type: map_at_100 value: 59.967999999999996 - type: map_at_1000 value: 60.023 - type: map_at_3 value: 56.515 - type: map_at_5 value: 58.272999999999996 - type: mrr_at_1 value: 53 - type: mrr_at_10 value: 61.102000000000004 - type: mrr_at_100 value: 61.476 - type: mrr_at_1000 value: 61.523 - type: mrr_at_3 value: 58.778 - type: mrr_at_5 value: 60.128 - type: ndcg_at_1 value: 53 - type: ndcg_at_10 value: 64.43100000000001 - type: ndcg_at_100 value: 66.73599999999999 - type: ndcg_at_1000 value: 68.027 - type: ndcg_at_3 value: 59.279 - type: ndcg_at_5 value: 61.888 - type: precision_at_1 value: 53 - type: precision_at_10 value: 8.767 - type: precision_at_100 value: 1.01 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 23.444000000000003 - type: precision_at_5 value: 15.667 - type: recall_at_1 value: 50.05 - type: recall_at_10 value: 78.511 - type: recall_at_100 value: 88.5 - type: 
recall_at_1000 value: 98.333 - type: recall_at_3 value: 64.117 - type: recall_at_5 value: 70.867 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.72178217821782 - type: cos_sim_ap value: 93.0728601593541 - type: cos_sim_f1 value: 85.6727976766699 - type: cos_sim_precision value: 83.02063789868667 - type: cos_sim_recall value: 88.5 - type: dot_accuracy value: 99.72178217821782 - type: dot_ap value: 93.07287396168348 - type: dot_f1 value: 85.6727976766699 - type: dot_precision value: 83.02063789868667 - type: dot_recall value: 88.5 - type: euclidean_accuracy value: 99.72178217821782 - type: euclidean_ap value: 93.07285657982895 - type: euclidean_f1 value: 85.6727976766699 - type: euclidean_precision value: 83.02063789868667 - type: euclidean_recall value: 88.5 - type: manhattan_accuracy value: 99.72475247524753 - type: manhattan_ap value: 93.02792973059809 - type: manhattan_f1 value: 85.7727737973388 - type: manhattan_precision value: 87.84067085953879 - type: manhattan_recall value: 83.8 - type: max_accuracy value: 99.72475247524753 - type: max_ap value: 93.07287396168348 - type: max_f1 value: 85.7727737973388 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 68.77583615550819 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.151636938606956 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.16607939471187 - type: mrr value: 52.95172046091163 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.314646669495666 - type: cos_sim_spearman value: 31.83562491439455 - type: dot_pearson value: 31.314590842874157 - type: dot_spearman value: 31.83363065810437 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.198 - type: map_at_10 value: 1.3010000000000002 - type: map_at_100 value: 7.2139999999999995 - type: map_at_1000 value: 20.179 - type: map_at_3 value: 0.528 - type: map_at_5 value: 0.8019999999999999 - type: mrr_at_1 value: 72 - type: mrr_at_10 value: 83.39999999999999 - type: mrr_at_100 value: 83.39999999999999 - type: mrr_at_1000 value: 83.39999999999999 - type: mrr_at_3 value: 81.667 - type: mrr_at_5 value: 83.06700000000001 - type: ndcg_at_1 value: 66 - type: ndcg_at_10 value: 58.059000000000005 - type: ndcg_at_100 value: 44.316 - type: ndcg_at_1000 value: 43.147000000000006 - type: ndcg_at_3 value: 63.815999999999995 - type: ndcg_at_5 value: 63.005 - type: precision_at_1 value: 72 - type: precision_at_10 value: 61.4 - type: precision_at_100 value: 45.62 - type: precision_at_1000 value: 19.866 - type: precision_at_3 value: 70 - type: precision_at_5 value: 68.8 - type: recall_at_1 value: 0.198 - type: recall_at_10 value: 1.517 - type: 
recall_at_100 value: 10.587 - type: recall_at_1000 value: 41.233 - type: recall_at_3 value: 0.573 - type: recall_at_5 value: 0.907 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.894 - type: map_at_10 value: 8.488999999999999 - type: map_at_100 value: 14.445 - type: map_at_1000 value: 16.078 - type: map_at_3 value: 4.589 - type: map_at_5 value: 6.019 - type: mrr_at_1 value: 22.448999999999998 - type: mrr_at_10 value: 39.82 - type: mrr_at_100 value: 40.752 - type: mrr_at_1000 value: 40.771 - type: mrr_at_3 value: 34.354 - type: mrr_at_5 value: 37.721 - type: ndcg_at_1 value: 19.387999999999998 - type: ndcg_at_10 value: 21.563 - type: ndcg_at_100 value: 33.857 - type: ndcg_at_1000 value: 46.199 - type: ndcg_at_3 value: 22.296 - type: ndcg_at_5 value: 21.770999999999997 - type: precision_at_1 value: 22.448999999999998 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.142999999999999 - type: precision_at_1000 value: 1.541 - type: precision_at_3 value: 24.490000000000002 - type: precision_at_5 value: 22.448999999999998 - type: recall_at_1 value: 1.894 - type: recall_at_10 value: 14.931 - type: recall_at_100 value: 45.524 - type: recall_at_1000 value: 83.243 - type: recall_at_3 value: 5.712 - type: recall_at_5 value: 8.386000000000001 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.049 - type: ap value: 13.85116971310922 - type: f1 value: 54.37504302487686 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.1312959818902 - type: f1 value: 64.11413877009383 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 54.13103431861502 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.327889372355 - type: cos_sim_ap value: 77.42059895975699 - type: cos_sim_f1 value: 71.02706903250873 - type: cos_sim_precision value: 69.75324344950394 - type: cos_sim_recall value: 72.34828496042216 - type: dot_accuracy value: 87.327889372355 - type: dot_ap value: 77.4209479346677 - type: dot_f1 value: 71.02706903250873 - type: dot_precision value: 69.75324344950394 - type: dot_recall value: 72.34828496042216 - type: euclidean_accuracy value: 87.327889372355 - type: euclidean_ap value: 77.42096495861037 - type: euclidean_f1 value: 71.02706903250873 - type: euclidean_precision value: 69.75324344950394 - type: euclidean_recall value: 72.34828496042216 - type: manhattan_accuracy value: 87.31000774870358 - type: manhattan_ap value: 77.38930750711619 - type: manhattan_f1 value: 71.07935314027831 - type: manhattan_precision value: 67.70957726295677 - type: manhattan_recall value: 74.80211081794195 - type: max_accuracy value: 87.327889372355 - type: max_ap value: 77.42096495861037 - type: max_f1 value: 71.07935314027831 - task: type: PairClassification dataset: type: 
mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.58939729110878 - type: cos_sim_ap value: 87.17594155025475 - type: cos_sim_f1 value: 79.21146953405018 - type: cos_sim_precision value: 76.8918527109307 - type: cos_sim_recall value: 81.67539267015707 - type: dot_accuracy value: 89.58939729110878 - type: dot_ap value: 87.17593963273593 - type: dot_f1 value: 79.21146953405018 - type: dot_precision value: 76.8918527109307 - type: dot_recall value: 81.67539267015707 - type: euclidean_accuracy value: 89.58939729110878 - type: euclidean_ap value: 87.17592466925834 - type: euclidean_f1 value: 79.21146953405018 - type: euclidean_precision value: 76.8918527109307 - type: euclidean_recall value: 81.67539267015707 - type: manhattan_accuracy value: 89.62626615438352 - type: manhattan_ap value: 87.16589873161546 - type: manhattan_f1 value: 79.25143598295348 - type: manhattan_precision value: 76.39494177323712 - type: manhattan_recall value: 82.32984293193716 - type: max_accuracy value: 89.62626615438352 - type: max_ap value: 87.17594155025475 - type: max_f1 value: 79.25143598295348 duplicated_from: hkunlp/instructor-large
---

# hkunlp/instructor-large

We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domain (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨‍🏫 achieves state-of-the-art (SOTA) performance on 70 diverse embedding tasks ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard))! The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!

**************************** **Updates** ****************************

* 12/28: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-large) trained with hard negatives, which gives better performance.
* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-large) and [project page](https://instructor-embedding.github.io/)! Check them out!

## Quick start

<hr />

## Installation

```bash
pip install InstructorEmbedding
```

## Compute your customized embeddings

Then you can use the model like this to calculate domain-specific and task-aware embeddings:

```python
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
embeddings = model.encode([[instruction, sentence]])
print(embeddings)
```

## Use cases

<hr />

## Calculate embeddings for your customized texts

If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions:

Represent the `domain` `text_type` for `task_objective`:

* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of the embedding, e.g., retrieve a document, classify the sentence, etc.

## Calculate Sentence similarities

You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.

```python
from sklearn.metrics.pairwise import cosine_similarity

# Reuses the `model` loaded in the "Compute your customized embeddings" section above.
sentences_a = [['Represent the Science sentence: ', 'Parton energy loss in QCD matter'],
               ['Represent the Financial statement: ', 'The Federal Reserve on Wednesday raised its benchmark interest rate.']]
sentences_b = [['Represent the Science sentence: ', 'The Chiral Phase Transition in Dissipative Dynamics'],
               ['Represent the Financial statement: ', 'The funds rose less than 0.5 per cent on Friday']]
embeddings_a = model.encode(sentences_a)
embeddings_b = model.encode(sentences_b)
similarities = cosine_similarity(embeddings_a, embeddings_b)
print(similarities)
```

## Information Retrieval

You can also use **customized embeddings** for information retrieval.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

query = [['Represent the Wikipedia question for retrieving supporting documents: ', 'where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ', 'Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
          ['Represent the Wikipedia document for retrieval: ', "The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
          ['Represent the Wikipedia document for retrieval: ', 'Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]
query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings, corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```

## Clustering

Use **customized embeddings** for clustering texts in groups.
```python
import sklearn.cluster

sentences = [['Represent the Medicine sentence for clustering: ', 'Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'],
             ['Represent the Medicine sentence for clustering: ', 'Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'],
             ['Represent the Medicine sentence for clustering: ', 'Fermion Bags in the Massive Gross-Neveu Model'],
             ['Represent the Medicine sentence for clustering: ', "QCD corrections to Associated t-tbar-H production at the Tevatron"],
             ['Represent the Medicine sentence for clustering: ', 'A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']]
embeddings = model.encode(sentences)
clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2)
clustering_model.fit(embeddings)
cluster_assignment = clustering_model.labels_
print(cluster_assignment)
```
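The card stops at printing the raw cluster labels. As an editor-added sketch (not part of the original card) that reuses the `embeddings` and `cluster_assignment` from the snippet above, scikit-learn's silhouette score is one way to sanity-check the assumed number of clusters:

```python
from sklearn.metrics import silhouette_score

# Ranges from -1 to 1; values near 0 suggest overlapping clusters.
score = silhouette_score(embeddings, cluster_assignment)
print(f"silhouette score for n_clusters=2: {score:.3f}")
```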
ufal/byt5-small-multilexnorm2021-en
ufal
2023-06-21T19:42:07Z
16
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "lexical normalization", "en", "dataset:mc4", "dataset:wikipedia", "dataset:multilexnorm", "arxiv:2105.13626", "arxiv:1907.06292", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: en
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---

# Fine-tuned ByT5-small for MultiLexNorm (English version)

![model image](https://github.com/ufal/multilexnorm2021/raw/master/img/overall.png)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.

Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).

## How to use

The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.

## How to cite

```bibtex
@inproceedings{wnut-ufal,
  title = "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
  author = "Samuel, David and Straka, Milan",
  booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
  year = "2021",
  publisher = "Association for Computational Linguistics",
  address = "Punta Cana, Dominican Republic"
}
```

## ByT5 - Small

ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).

ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.

ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).

Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)

Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
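As a footnote to the "How to use" section above: the exact token-to-token input format lives in the Colab demo, so the sketch below (an editor-added assumption, not part of the original card) only shows loading the checkpoint with the standard `transformers` classes:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# ByT5 operates directly on UTF-8 bytes, so no language-specific vocabulary is needed.
tokenizer = AutoTokenizer.from_pretrained("ufal/byt5-small-multilexnorm2021-en")
model = T5ForConditionalGeneration.from_pretrained("ufal/byt5-small-multilexnorm2021-en")
```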
rd124/distilbert-base-uncased-finetuned-imdb-v2
rd124
2023-06-21T19:36:28Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-21T19:24:19Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb-v2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb-v2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3723

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6273        | 1.0   | 381  | 2.4473          |
| 2.5148        | 2.0   | 762  | 2.3930          |
| 2.4786        | 3.0   | 1143 | 2.3852          |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
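The usage sections above are empty, so here is a hedged sketch (not from the card; the example sentence is invented) of querying the model with the standard `transformers` fill-mask pipeline:

```python
from transformers import pipeline

# distilbert-base-uncased checkpoints use [MASK] as the mask token.
unmasker = pipeline("fill-mask", model="rd124/distilbert-base-uncased-finetuned-imdb-v2")
for pred in unmasker("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```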
breadlicker45/llama-test
breadlicker45
2023-06-21T19:32:17Z
161
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-20T12:31:51Z
This model was fine-tuned/trained on nothing; DO NOT DOWNLOAD.
keremnazliel/distilbert_squad_for_musique_7
keremnazliel
2023-06-21T19:24:41Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-21T19:11:11Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert_squad_for_musique_7
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_squad_for_musique_7

This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.5452
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
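The card gives no usage snippet; the following is a hedged sketch (the question/context pair is invented) of extractive QA with the `transformers` pipeline. Note the unusually large learning rate above (0.5452), which may mean the fine-tune diverged:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="keremnazliel/distilbert_squad_for_musique_7")
result = qa(question="Which checkpoint was this model fine-tuned from?",
            context="The model is a fine-tuned version of distilbert-base-cased-distilled-squad.")
print(result["answer"], result["score"])
```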
fatzetob/Ponti_Object_Classification
fatzetob
2023-06-21T19:24:07Z
1
0
tf-keras
[ "tf-keras", "vgg16", "image-classification", "region:us" ]
image-classification
2023-06-13T08:09:07Z
---
pipeline_tag: image-classification
---
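The card carries only a pipeline tag, but the repo's tags mark it as a `tf-keras` VGG16 classifier; a minimal sketch, assuming the standard `huggingface_hub` Keras loader (the input shape and class labels are unknown, so only loading is shown):

```python
from huggingface_hub import from_pretrained_keras

# Downloads and rebuilds the saved Keras model from the Hub.
model = from_pretrained_keras("fatzetob/Ponti_Object_Classification")
model.summary()
```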
zslrmhb/Reinforce-Cartpole-v1
zslrmhb
2023-06-21T19:20:17Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T19:19:37Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 495.24 +/- 47.36
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
keremnazliel/distilbert_squad_for_musique_6
keremnazliel
2023-06-21T19:05:34Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-21T18:39:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert_squad_for_musique_6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_squad_for_musique_6

This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
kchen621/Reinforce-Pixelcopter-PLE-v0
kchen621
2023-06-21T19:00:37Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T16:04:33Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 30.30 +/- 32.28
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
S3S3/ppo-LunarLander-v2.2
S3S3
2023-06-21T18:53:40Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T18:53:21Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 283.11 +/- 22.06 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
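The usage block above is left as a TODO. Below is a minimal loading sketch, assuming the checkpoint inside the repo follows the usual `huggingface_sb3` naming (`ppo-LunarLander-v2.2.zip` is a guess) and that Gymnasium with the Box2D extra is installed.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="S3S3/ppo-LunarLander-v2.2", filename="ppo-LunarLander-v2.2.zip")
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```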
koreadaeil/my_awesome_qa_model
koreadaeil
2023-06-21T18:53:31Z
63
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-21T17:53:20Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: koreadaeil/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # koreadaeil/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.8709 - Validation Loss: 5.8422 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.9555 | 5.8683 | 0 | | 5.9065 | 5.8422 | 1 | | 5.8709 | 5.8422 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.0 - Tokenizers 0.13.3
DunnBC22/codebert-base-mlm-Malicious_URLs
DunnBC22
2023-06-21T18:37:32Z
11
1
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-21T14:47:04Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: codebert-base-mlm-Malicious_URLs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codebert-base-mlm-Malicious_URLs This model is a fine-tuned version of [microsoft/codebert-base-mlm](https://huggingface.co/microsoft/codebert-base-mlm) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7442 - Accuracy: 0.7322 - Weighted f1: 0.6538 - Micro f1: 0.7322 - Macro f1: 0.4303 - Weighted recall: 0.7322 - Micro recall: 0.7322 - Macro recall: 0.4233 - Weighted precision: 0.6314 - Micro precision: 0.7322 - Macro precision: 0.6034 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
shahafw/a2c-PandaReachDense-v2
shahafw
2023-06-21T18:32:17Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-10T21:59:12Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.76 +/- 0.71 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
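As with the other stable-baselines3 cards here, the usage block is a TODO; a minimal loading sketch follows. The checkpoint filename is an assumption, and actually instantiating `PandaReachDense-v2` additionally requires `panda-gym`, which registers the Panda environments on import.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename inside the repo is an assumption (huggingface_sb3 repos
# usually store the checkpoint as "<model-name>.zip").
checkpoint = load_from_hub(repo_id="shahafw/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
print(model.policy)  # inspect the loaded actor-critic network
```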
John1561/Web_Ui_Stable_Diffusion
John1561
2023-06-21T17:54:49Z
0
0
null
[ "region:us" ]
null
2023-06-21T17:53:38Z
# Stable Diffusion Webui Bot With Telegram - This is an open-source project; no charges are allowed! - The owner can use `/ 30` to get a 30-day token - Recommended Stable Diffusion Webui start command args: `export COMMANDLINE_ARGS="--api --no-hashing --skip-torch-cuda-test --skip-version-check --disable-nan-check --no-download-sd-model --no-half-controlnet --upcast-sampling --no-half-vae --opt-sdp-attention --disable-safe-unpickle --lowram --opt-split-attention --opt-channelslast --deepdanbooru"` - Necessary extension: `https://github.com/zijiren233/sd-webui-controlnet`
swl-models/dvArch-Exterior
swl-models
2023-06-21T17:54:31Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T17:49:14Z
--- license: creativeml-openrail-m ---
keremnazliel/distilbert_squad_for_musique_4
keremnazliel
2023-06-21T17:51:25Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-21T17:48:15Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert_squad_for_musique_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_squad_for_musique_4 This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.5452 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
swl-models/Cetus-Mix-v2
swl-models
2023-06-21T17:51:05Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T17:44:12Z
--- license: creativeml-openrail-m ---
openlamm/epcl_vit-L_256tokens
openlamm
2023-06-21T17:49:43Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-06-20T18:30:31Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [OpenLAMM] - **Model type:** [Pytorch] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [FrozenCLIP] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> ScanNet [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
swl-models/Cetus-Mix-CodaEdition
swl-models
2023-06-21T17:37:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T17:30:16Z
--- license: creativeml-openrail-m ---
bri25yu/wmt19-ende-t5-small
bri25yu
2023-06-21T17:06:55Z
19
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt19", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-14T04:11:36Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt19 metrics: - bleu model-index: - name: wmt19-ende-t5-small results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt19 type: wmt19 config: de-en split: validation args: de-en metrics: - name: Bleu type: bleu value: 16.085214160195623 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wmt19-ende-t5-small This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset. It achieves the following results on the evaluation set: - Loss: 1.5150 - Bleu: 16.0852 - Brevity Penalty: 0.5512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 256 - eval_batch_size: 512 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Brevity Penalty | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------------:| | 2.7369 | 0.01 | 100 | 2.0018 | 9.0851 | 0.5107 | | 3.3896 | 0.02 | 200 | 1.9108 | 9.9970 | 0.5127 | | 3.0442 | 0.03 | 300 | 1.8627 | 10.7670 | 0.5245 | | 2.5136 | 0.04 | 400 | 1.8244 | 10.9280 | 0.5132 | | 2.4092 | 0.05 | 500 | 1.7951 | 11.4717 | 0.5260 | | 3.2441 | 0.06 | 600 | 1.7736 | 11.7350 | 0.5197 | | 2.6997 | 0.07 | 700 | 1.7563 | 12.0741 | 0.5260 | | 2.5072 | 0.08 | 800 | 1.7416 | 12.3735 | 0.5283 | | 2.3788 | 0.09 | 900 | 1.7267 | 12.4288 | 0.5285 | | 2.3533 | 0.1 | 1000 | 1.7247 | 12.4395 | 0.5249 | | 2.2911 | 0.11 | 1100 | 1.7078 | 12.3887 | 0.5201 | | 2.3949 | 0.12 | 1200 | 1.6997 | 12.8109 | 0.5288 | | 2.2343 | 0.13 | 1300 | 1.6930 | 12.8213 | 0.5283 | | 2.2525 | 0.14 | 1400 | 1.6851 | 13.1221 | 0.5285 | | 2.2604 | 0.15 | 1500 | 1.6795 | 13.0896 | 0.5261 | | 2.3146 | 0.16 | 1600 | 1.6723 | 13.1741 | 0.5291 | | 2.5767 | 0.17 | 1700 | 1.6596 | 13.4224 | 0.5248 | | 2.698 | 0.18 | 1800 | 1.6576 | 13.6733 | 0.5334 | | 2.6416 | 0.19 | 1900 | 1.6514 | 13.7184 | 0.5350 | | 3.0841 | 0.2 | 2000 | 1.6448 | 13.9079 | 0.5357 | | 2.5039 | 0.21 | 2100 | 1.6375 | 13.9860 | 0.5361 | | 2.5829 | 0.22 | 2200 | 1.6366 | 13.9246 | 0.5328 | | 2.5332 | 0.23 | 2300 | 1.6348 | 13.4895 | 0.5209 | | 2.5832 | 0.24 | 2400 | 1.6240 | 14.0445 | 0.5349 | | 2.8577 | 0.25 | 2500 | 1.6182 | 14.1085 | 0.5344 | | 2.9157 | 0.26 | 2600 | 1.6285 | 13.7982 | 0.5365 | | 2.6758 | 0.27 | 2700 | 1.6249 | 13.8638 | 0.5392 | | 2.0391 | 0.28 | 2800 | 1.6205 | 13.9645 | 0.5396 | | 2.8146 | 0.29 | 2900 | 1.6210 | 14.2823 | 0.5409 | | 2.6602 | 0.3 | 3000 | 1.6219 | 13.9663 | 0.5391 | | 1.7745 | 0.31 | 3100 | 1.6088 | 14.4206 | 0.5413 | | 2.3483 | 0.32 | 3200 | 1.6050 | 14.6208 | 0.5471 | | 1.9911 | 0.33 | 3300 | 1.6004 | 14.5458 | 0.5396 | | 1.8973 | 0.34 | 3400 | 1.5985 | 14.5387 | 0.5400 | | 2.6956 | 0.35 | 3500 | 1.6005 | 14.7482 | 0.5458 | | 2.322 | 0.36 | 3600 | 1.5949 | 14.7322 | 0.5448 | | 1.5147 | 0.37 | 3700 | 1.5966 | 14.8456 | 0.5431 | | 2.0606 | 0.38 | 3800 | 1.5899 | 14.6267 | 0.5333 | | 3.0341 | 0.39 | 3900 | 1.5842 | 14.7705 | 0.5414 | | 1.5069 | 0.4 | 
4000 | 1.5911 | 14.6861 | 0.5372 | | 2.339 | 0.41 | 4100 | 1.5949 | 14.6970 | 0.5481 | | 2.5221 | 0.42 | 4200 | 1.5870 | 14.6996 | 0.5403 | | 1.6398 | 0.43 | 4300 | 1.5790 | 14.8826 | 0.5431 | | 2.2758 | 0.44 | 4400 | 1.5818 | 14.5580 | 0.5375 | | 2.2622 | 0.45 | 4500 | 1.5821 | 15.0062 | 0.5428 | | 1.3329 | 0.46 | 4600 | 1.5792 | 14.7609 | 0.5377 | | 1.7537 | 0.47 | 4700 | 1.5744 | 15.1037 | 0.5425 | | 2.5379 | 0.48 | 4800 | 1.5756 | 15.2684 | 0.5479 | | 2.1236 | 0.49 | 4900 | 1.5822 | 14.8229 | 0.5478 | | 2.9621 | 0.5 | 5000 | 1.5747 | 14.9948 | 0.5443 | | 1.9832 | 0.51 | 5100 | 1.5838 | 14.8682 | 0.5468 | | 1.4962 | 0.52 | 5200 | 1.5836 | 14.8094 | 0.5397 | | 2.4318 | 0.53 | 5300 | 1.5826 | 14.8213 | 0.5422 | | 1.9338 | 0.54 | 5400 | 1.5869 | 14.5571 | 0.5402 | | 1.404 | 0.55 | 5500 | 1.5891 | 14.5103 | 0.5414 | | 2.2803 | 0.56 | 5600 | 1.5864 | 14.6338 | 0.5417 | | 2.3725 | 0.57 | 5700 | 1.5893 | 14.3405 | 0.5385 | | 1.1436 | 0.58 | 5800 | 1.5703 | 15.3309 | 0.5457 | | 2.1695 | 0.59 | 5900 | 1.5690 | 15.3571 | 0.5438 | | 1.7295 | 0.6 | 6000 | 1.5653 | 15.3547 | 0.5421 | | 1.3033 | 0.61 | 6100 | 1.5649 | 15.3084 | 0.5442 | | 2.396 | 0.62 | 6200 | 1.5592 | 15.5594 | 0.5440 | | 2.133 | 0.63 | 6300 | 1.5634 | 15.3689 | 0.5420 | | 1.1775 | 0.64 | 6400 | 1.5639 | 15.4869 | 0.5389 | | 2.0793 | 0.65 | 6500 | 1.5541 | 15.6320 | 0.5453 | | 1.7569 | 0.66 | 6600 | 1.5588 | 15.7405 | 0.5429 | | 1.1035 | 0.67 | 6700 | 1.5520 | 15.7011 | 0.5450 | | 1.5799 | 0.68 | 6800 | 1.5517 | 15.9203 | 0.5490 | | 1.7737 | 0.69 | 6900 | 1.5473 | 15.8992 | 0.5480 | | 1.3071 | 0.7 | 7000 | 1.5491 | 15.7140 | 0.5446 | | 2.2214 | 0.71 | 7100 | 1.5460 | 15.9360 | 0.5479 | | 1.7848 | 0.72 | 7200 | 1.5431 | 15.9338 | 0.5490 | | 1.1231 | 0.73 | 7300 | 1.5398 | 15.8774 | 0.5444 | | 1.7741 | 0.74 | 7400 | 1.5399 | 15.9724 | 0.5451 | | 1.7098 | 0.75 | 7500 | 1.5361 | 15.9098 | 0.5447 | | 1.0787 | 0.76 | 7600 | 1.5393 | 15.9781 | 0.5457 | | 1.9856 | 0.77 | 7700 | 1.5348 | 15.9521 | 0.5462 | | 2.1294 | 0.78 | 7800 | 1.5345 | 16.0042 | 0.5463 | | 1.1938 | 0.79 | 7900 | 1.5314 | 16.0554 | 0.5495 | | 1.9579 | 0.8 | 8000 | 1.5307 | 15.9349 | 0.5482 | | 1.844 | 0.81 | 8100 | 1.5285 | 15.8589 | 0.5448 | | 1.1464 | 0.82 | 8200 | 1.5413 | 15.9210 | 0.5435 | | 2.2903 | 0.83 | 8300 | 1.5230 | 16.0164 | 0.5405 | | 2.1489 | 0.84 | 8400 | 1.5263 | 15.9423 | 0.5443 | | 1.8138 | 0.85 | 8500 | 1.5350 | 15.8267 | 0.5464 | | 2.4025 | 0.86 | 8600 | 1.5275 | 15.8493 | 0.5430 | | 1.6758 | 0.87 | 8700 | 1.5206 | 15.9246 | 0.5464 | | 1.3671 | 0.88 | 8800 | 1.5235 | 15.9662 | 0.5460 | | 2.3341 | 0.89 | 8900 | 1.5221 | 16.0465 | 0.5456 | | 1.8405 | 0.9 | 9000 | 1.5201 | 16.0834 | 0.5454 | | 1.4133 | 0.91 | 9100 | 1.5250 | 15.8619 | 0.5442 | | 2.4374 | 0.92 | 9200 | 1.5261 | 15.8174 | 0.5429 | | 1.3627 | 0.93 | 9300 | 1.5257 | 15.7541 | 0.5450 | | 1.5003 | 0.94 | 9400 | 1.5249 | 15.9109 | 0.5463 | | 2.2002 | 0.95 | 9500 | 1.5252 | 15.8338 | 0.5434 | | 2.3461 | 0.96 | 9600 | 1.5262 | 15.9195 | 0.5469 | | 1.2607 | 0.97 | 9700 | 1.5197 | 15.8370 | 0.5459 | | 2.3737 | 0.98 | 9800 | 1.5178 | 16.0579 | 0.5475 | | 1.3968 | 0.99 | 9900 | 1.5132 | 16.1729 | 0.5522 | | 1.1816 | 1.0 | 10000 | 1.5150 | 16.0852 | 0.5512 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
swl-models/ShyakuJXMix-v1.0
swl-models
2023-06-21T16:59:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T16:03:58Z
--- license: creativeml-openrail-m ---
keremnazliel/distilbert_squad_for_musique_3
keremnazliel
2023-06-21T16:58:29Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-21T16:54:55Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert_squad_for_musique_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_squad_for_musique_3 This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.5452 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
antokprasetyo/Anggittt
antokprasetyo
2023-06-21T16:57:52Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T16:55:58Z
--- license: creativeml-openrail-m ---
mandliya/ppo-LunarLander-v2
mandliya
2023-06-21T16:56:50Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T07:37:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 267.94 +/- 15.09 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
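A hedged sketch of loading and evaluating this agent (the checkpoint filename is a guess at the usual `huggingface_sb3` layout; Gymnasium with the Box2D extra is assumed installed):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="mandliya/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Reproduce a mean-reward estimate like the one reported on this card.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```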
Curiolearner/dqn-SpaceInvadersNoFrameskip-v4
Curiolearner
2023-06-21T16:49:56Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T16:49:21Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 549.50 +/- 96.42 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Curiolearner -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Curiolearner -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Curiolearner ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
deepghs/anime_ch_eye_color
deepghs
2023-06-21T16:46:41Z
0
0
null
[ "onnx", "art", "image-classification", "dataset:deepghs/anime_ch_eye_color", "license:mit", "region:us" ]
image-classification
2023-06-14T03:34:13Z
--- license: mit datasets: - deepghs/anime_ch_eye_color metrics: - accuracy - f1 pipeline_tag: image-classification tags: - art --- | Name | FLOPS | Params | Accuracy | AUC | Confusion | Labels | |:-------------------:|:-------:|:--------:|:----------:|:------:|:---------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------:| | caformer_s36_raw | 22.10G | 37.24M | 57.27% | 0.9246 | [confusion](https://huggingface.co/deepghs/anime_ch_eye_color/blob/main/caformer_s36_raw/plot_confusion.png) | `aqua`, `blue`, `brown`, `orange`, `golden`, `yellow`, `pink`, `purple`, `red`, `grey`, `silver`, `white`, `black`, `green` | | caformer_s36_v0 | 22.10G | 37.23M | 64.18% | 0.9278 | [confusion](https://huggingface.co/deepghs/anime_ch_eye_color/blob/main/caformer_s36_v0/plot_confusion.png) | `aqua`, `blue`, `green`, `brown`, `orange`, `yellow`, `pink`, `purple`, `red`, `light`, `black` | | mobilenetv3_v0_dist | 0.63G | 4.18M | 60.66% | 0.9201 | [confusion](https://huggingface.co/deepghs/anime_ch_eye_color/blob/main/mobilenetv3_v0_dist/plot_confusion.png) | `aqua`, `blue`, `green`, `brown`, `orange`, `yellow`, `pink`, `purple`, `red`, `light`, `black` |
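The table lists the exported classifiers but no loading code. A heavily hedged `onnxruntime` sketch follows: the per-model file layout (`<name>/model.onnx`) is an assumption, and the expected preprocessing (input size, normalization) is not documented on this card, so the snippet inspects the model's declared input instead of guessing it.

```python
import numpy as np
import onnxruntime
from huggingface_hub import hf_hub_download

# Assumed layout: each table row is a subfolder holding "model.onnx".
path = hf_hub_download(repo_id="deepghs/anime_ch_eye_color", filename="caformer_s36_v0/model.onnx")
session = onnxruntime.InferenceSession(path)

inp = session.get_inputs()[0]
print(inp.name, inp.shape)  # the expected input tensor

# Dummy forward pass, replacing any symbolic dimensions with 1.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)
outputs = session.run(None, {inp.name: x})
print(outputs[0].shape)  # one score per eye-color label
```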
keremnazliel/distilbert_squad_for_musique_2
keremnazliel
2023-06-21T16:46:16Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-06-21T15:28:39Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert_squad_for_musique_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_squad_for_musique_2 This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.5452 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
deepghs/anime_ch_hair_color
deepghs
2023-06-21T16:43:50Z
0
1
null
[ "onnx", "art", "image-classification", "dataset:deepghs/anime_ch_hair_color", "license:mit", "region:us" ]
image-classification
2023-06-14T03:26:04Z
--- license: mit datasets: - deepghs/anime_ch_hair_color metrics: - accuracy - f1 pipeline_tag: image-classification tags: - art --- | Name | FLOPS | Params | Accuracy | AUC | Confusion | Labels | |:----------------------:|:-------:|:--------:|:----------:|:------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:| | caformer_s36_raw | 22.10G | 37.23M | 65.55% | 0.9382 | [confusion](https://huggingface.co/deepghs/anime_ch_hair_color/blob/main/caformer_s36_raw/plot_confusion.png) | `aqua`, `blue`, `brown`, `orange`, `pink`, `purple`, `red`, `grey`, `silver`, `white`, `black`, `green` | | caformer_s36_v0 | 22.10G | 37.23M | 75.06% | 0.9521 | [confusion](https://huggingface.co/deepghs/anime_ch_hair_color/blob/main/caformer_s36_v0/plot_confusion.png) | `aqua`, `blue`, `green`, `brown`, `orange`, `pink`, `purple`, `red`, `light`, `black` | | caformer_s36_v0_ncerce | 22.10G | 37.23M | 75.03% | 0.9357 | [confusion](https://huggingface.co/deepghs/anime_ch_hair_color/blob/main/caformer_s36_v0_ncerce/plot_confusion.png) | `aqua`, `blue`, `green`, `brown`, `orange`, `pink`, `purple`, `red`, `light`, `black` | | mobilenetv3_v0_dist | 0.63G | 4.18M | 72.21% | 0.9458 | [confusion](https://huggingface.co/deepghs/anime_ch_hair_color/blob/main/mobilenetv3_v0_dist/plot_confusion.png) | `aqua`, `blue`, `green`, `brown`, `orange`, `pink`, `purple`, `red`, `light`, `black` |
Naseej/noon-7b
Naseej
2023-06-21T16:42:13Z
659
42
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "instructional", "question-answering", "arabic", "ar", "en", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-20T19:21:26Z
--- license: bigscience-bloom-rail-1.0 language: - ar - en pipeline_tag: text-generation tags: - instructional - question-answering - arabic widget: - text: اكتب مقال عن الذكاء الصناعي وتطوراته. example_title: Instruction 1 - text: اعط بعض النصائح عن كيفية الحفاظ على حياة صحية. example_title: Instruction 2 - text: ماذا تعرف عن فوائد الصيام؟ example_title: Question 1 - text: قطف إسماعيل 5 تفاحات، وأعطى 2 منها لأخيه، فكم بقي عند إسماعيل من تفاحة؟ example_title: Question 2 --- <img src="https://i.ibb.co/3NzxfFQ/noon-banner.png" alt="noon-banner" border="0" width="85%" height="85%" style="margin:auto; display:block"> ## **Noon - a 7-billion parameter Arabic Large Language Model** We present the 7-billion parameter variant of **Noon**, an Arabic Large Language Model based on **BLOOM**, a foundation model released by the [bigscience](https://huggingface.co/bigscience) workshop. Noon was trained with the main focus of having a model that responds to various types of instructions and questions (text generation, code generation, mathematical problems, closed/open-book questions, etc.). We trained the model using the ColossalAI framework, which fully supports HuggingFace library models and implements different optimization and quantization techniques for billion-scale LLMs. The training data is a combination of Arabic datasets covering multiple tasks; more details are provided in the dataset section. Welcome to the Noon model card! Noon has more than 7 billion parameters, making it the largest Arabic language model released to date. It was trained on more than 110,000 Arabic data records covering over 11 million words, spanning text generation, code generation, mathematical problem solving, and closed/open-book questions. The model was trained with advanced techniques such as distributed multi-GPU training, LoRA (Low Rank Adaptation), and ZeRO (Zero Redundancy Optimization). We are proud to present a model that marks a qualitative leap in Arabic language processing. The following sections give more detail on how to use Noon and on the technical characteristics of the training process, in the hope that the model will serve developers, researchers, and all Arabic speakers. ### **Usage** Using our model requires only the Transformers library; the model can be loaded as follows: ```python from transformers import BloomTokenizerFast, BloomForCausalLM, pipeline text="اكتب مقالا من عدة أسطر عن الذكاء الصناعي وتطوراته" prompt = f'Instruction:\n{text}\n\nResponse:' model = BloomForCausalLM.from_pretrained('Naseej/noon-7b') tokenizer = BloomTokenizerFast.from_pretrained('Naseej/noon-7b') generation_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer) # We recommend the provided hyperparameters for generation # But encourage you to try different values response = generation_pipeline(prompt, pad_token_id=tokenizer.eos_token_id, do_sample=False, num_beams=4, max_length=500, top_p=0.1, top_k=20, repetition_penalty = 3.0, no_repeat_ngram_size=3)[0]['generated_text'] print(response) ``` ### **Training's computational requirements** Noon-7b was trained on 8 A100 GPUs using distributed multi-GPU training via the [ColossalAI](https://github.com/hpcaitech/ColossalAI) framework. ### **Dataset** To ensure the diversity of data points and satisfy our purpose of instruction-tuning, we collected, labeled, filtered, and reviewed a set of datasets, each tailored to specific instruction types.
Noting that all the datasets are in Arabic, they comprise: - [Second version of the Alpaca dataset](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM), generated using GPT4. - Self-instruct records, split between samples generated by us using the [self-instruct](https://github.com/yizhongw/self-instruct) framework, and further translated ones. - The instructional dataset released by [Databricks](https://github.com/databrickslabs/dolly), which comprises high quality human-generated instructions and responses. - [TruthfulQA](https://huggingface.co/datasets/truthful_qa) dataset, to further guide the model on how to truthfully respond to factoid-based questions. - [Grade School Math](https://huggingface.co/datasets/gsm8k) dataset, to enhance the model's performance using chain-of-thought mathematical problems. - Arabic arithmetic problems, generated by us using ChatGPT for further improvement of the model's ability to solve mathematical problems. The full dataset adds up to over **110K** records. ### **Evaluation** Throughout a set of over 4000 Arabic data samples, Noon-7b was automatically evaluated using **OpenAI's [GPT3.5 Turbo](https://platform.openai.com/docs/models)** model. Provided with clear and carefully crafted evaluation criteria (aligning with the model's training objective as well as the syntactic and grammatical rules of the Arabic language), GPT3.5 Turbo was prompted to evaluate each of Noon's responses to an input instruction on a scale of **1 - 5**. We concluded the evaluation by averaging the provided scores, adding up to an impressive final score of **4.07/5**. **NOTE:** Although we acknowledge that this proposed framework is not an exact solution and that it remains an ongoing area of research, we hold the belief that it has the potential to replicate human assessments to a reasonably satisfactory extent. ### **Disclaimer** The generated responses from this AI model are purely algorithmic and should be interpreted with caution. The model's outputs may occasionally exhibit bias, offensive language, or potentially harmful content. It is important to note that these responses do not reflect the personal preferences or viewpoints of the authors or the organization of Naseej. While every effort is made to mitigate the harmfulness of the model's outputs, it is impossible to guarantee complete elimination of biases or offensive content. The model learns from vast amounts of data and may inadvertently replicate or amplify existing societal biases present in the training data. Users are advised to critically evaluate and verify the information provided by the model. Exercise discretion when utilizing the model's responses, particularly in sensitive or controversial topics. We are committed to ongoing research and development to improve the model's performance, minimize biases, and reduce harmful outputs. Your feedback and insights are valuable in helping us achieve these goals.
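The GPT-3.5-based evaluation is described only in prose; a minimal sketch of such a scoring loop is below, assuming the pre-1.0 `openai` SDK. The prompt wording and the `samples` list of (instruction, response) pairs are hypothetical — the exact rubric Naseej used is not published on this card.

```python
import openai  # pre-1.0 SDK assumed; set openai.api_key first

def score(instruction: str, response: str) -> float:
    """Ask GPT-3.5 Turbo to rate one response on a 1-5 scale (hypothetical prompt)."""
    prompt = (
        "On a scale of 1 to 5, rate the quality, truthfulness, and Arabic grammar "
        "of the response to the instruction. Answer with the number only.\n\n"
        f"Instruction: {instruction}\nResponse: {response}"
    )
    out = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return float(out.choices[0].message.content.strip())

scores = [score(i, r) for i, r in samples]  # `samples`: assumed list of pairs
print(f"average score: {sum(scores) / len(scores):.2f} / 5")
```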
deepghs/anime_ch_ear
deepghs
2023-06-21T16:34:57Z
0
0
null
[ "onnx", "art", "image-classification", "dataset:deepghs/anime_ch_ear", "license:mit", "region:us" ]
image-classification
2023-06-17T02:15:03Z
--- license: mit datasets: - deepghs/anime_ch_ear metrics: - accuracy - f1 pipeline_tag: image-classification tags: - art --- | Name | FLOPS | Params | Accuracy | AUC | Confusion | Labels | |:-------------------:|:-------:|:--------:|:----------:|:------:|:---------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | caformer_s36_raw | 22.10G | 37.27M | 82.56% | 0.9847 | [confusion](https://huggingface.co/deepghs/anime_ch_ear/blob/main/caformer_s36_raw/plot_confusion.png) | `alpaca`, `bat`, `bear`, `bunny`, `cat`, `cow`, `deer`, `dog`, `ermine`, `ferret`, `fox`, `goat`, `horse`, `jackal`, `lion`, `monkey`, `mouse`, `panda`, `pig`, `pikachu`, `pointed`, `raccoon`, `reindeer`, `robot`, `sheep`, `squirrel`, `tiger`, `wolf`, `none` | | caformer_s36_v0 | 22.10G | 37.27M | 83.33% | 0.9845 | [confusion](https://huggingface.co/deepghs/anime_ch_ear/blob/main/caformer_s36_v0/plot_confusion.png) | `alpaca`, `bat`, `bear`, `bunny`, `cat`, `cow`, `deer`, `dog`, `ermine`, `ferret`, `fox`, `goat`, `horse`, `jackal`, `lion`, `monkey`, `mouse`, `panda`, `pig`, `pikachu`, `pointed`, `raccoon`, `robot`, `sheep`, `squirrel`, `tiger`, `wolf`, `none` | | mobilenetv3_v0_dist | 0.63G | 4.18M | 74.70% | 0.9716 | [confusion](https://huggingface.co/deepghs/anime_ch_ear/blob/main/mobilenetv3_v0_dist/plot_confusion.png) | `alpaca`, `bat`, `bear`, `bunny`, `cat`, `cow`, `deer`, `dog`, `ermine`, `ferret`, `fox`, `goat`, `horse`, `jackal`, `lion`, `monkey`, `mouse`, `panda`, `pig`, `pikachu`, `pointed`, `raccoon`, `robot`, `sheep`, `squirrel`, `tiger`, `wolf`, `none` |
bandrocks/my_awesome_eli5_clm-model
bandrocks
2023-06-21T16:34:57Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-21T16:02:11Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7378 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.8621 | 1.0 | 1148 | 3.7567 | | 3.7762 | 2.0 | 2296 | 3.7399 | | 3.7328 | 3.0 | 3444 | 3.7378 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
Nacholmo/Counterfeit-V2.5-vae-swapped
Nacholmo
2023-06-21T16:34:48Z
34
2
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-10T20:29:47Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image --- # Counterfeit-V2.5 vae swapped, converted to diffusers for your enjoyment. 1. Safetensors to ckpt 2. Swap vae 3. Ckpt to diffusers 4. ?? 5. profit Original model: https://huggingface.co/gsdf/Counterfeit-V2.5
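Because the repo is already in diffusers format, loading should follow the standard `StableDiffusionPipeline` path; a minimal sketch (the prompt and fp16/CUDA settings are illustrative, not from the card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Nacholmo/Counterfeit-V2.5-vae-swapped", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("masterpiece, best quality, 1girl, looking at viewer").images[0]
image.save("counterfeit_sample.png")
```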
Nacholmo/meinamixv7-diffusers
Nacholmo
2023-06-21T16:34:39Z
23
1
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-03-06T02:08:03Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image --- Original model: https://huggingface.co/Meina/MeinaMix
anrojasor/ppo-LunarLander-v2
anrojasor
2023-06-21T16:33:24Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T01:42:17Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 269.39 +/- 23.64 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
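A minimal rollout sketch for this agent, under the same assumptions as the other LunarLander cards in this dump (checkpoint filename guessed from the repo name; Gymnasium with Box2D installed):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="anrojasor/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break
print(f"episode return: {total_reward:.1f}")
```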
koreadaeil/finetuned-bert-piqa
koreadaeil
2023-06-21T16:33:07Z
59
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-21T16:27:46Z
--- tags: - generated_from_keras_callback model-index: - name: koreadaeil/finetuned-bert-piqa results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # koreadaeil/finetuned-bert-piqa This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8264 - Validation Loss: 2.6491 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.8757 | 2.7555 | 0 | | 2.8434 | 2.7213 | 1 | | 2.8264 | 2.6491 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.0 - Tokenizers 0.13.3
Jinouga/brie-larson-v1
Jinouga
2023-06-21T16:23:28Z
32
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-21T16:19:32Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### brie-larson-V1 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
UMUTeam/spanish_capitalization_punctuation_restoration
UMUTeam
2023-06-21T16:23:15Z
50
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "es", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-21T16:11:47Z
--- widget: - text: qué rico está el helado example_title: Example 1 - text: estás bien example_title: Example 2 - text: mi equipo favorito es real madrid example_title: Example 3 language: - es ---
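The card ships only widget examples. A minimal sketch of running the restorer through the Transformers pipeline — the label scheme the model emits (how it encodes capitalization and punctuation) is not documented here, so the snippet just prints whatever the model returns:

```python
from transformers import pipeline

restore = pipeline(
    "token-classification",
    model="UMUTeam/spanish_capitalization_punctuation_restoration",
    aggregation_strategy="simple",
)
for pred in restore("qué rico está el helado"):
    print(pred["word"], pred["entity_group"], round(pred["score"], 3))
```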
SouhilOuchene/ACPRECBERT_Part2_islem
SouhilOuchene
2023-06-21T16:21:46Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "camembert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-06-21T16:21:02Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # SouhilOuchene/ACPRECBERT_Part2_islem This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("SouhilOuchene/ACPRECBERT_Part2_islem") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
snailgood/NWsnail
snailgood
2023-06-21T16:19:43Z
0
2
null
[ "arxiv:1910.09700", "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T15:38:23Z
--- license: creativeml-openrail-m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
catrabbitbear/dqn-SpaceInvadersNoFrameskip-v4
catrabbitbear
2023-06-21T16:15:29Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T16:14:49Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 579.00 +/- 282.12 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga catrabbitbear -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga catrabbitbear -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga catrabbitbear ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 2000000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
TheFools/Nabilafbynt
TheFools
2023-06-21T16:13:36Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T16:02:55Z
--- license: creativeml-openrail-m ---
koreadaeil/my_awesome_eli5_clm-model
koreadaeil
2023-06-21T16:02:39Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-21T05:50:23Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: koreadaeil/my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # koreadaeil/my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.9069 - Validation Loss: 2.7550 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.0432 | 2.9414 | 0 | | 3.0152 | 2.7736 | 1 | | 2.9069 | 2.7550 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.0 - Tokenizers 0.13.3
swl-models/Sakuramochimix-v1.0
swl-models
2023-06-21T16:02:10Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T15:59:02Z
--- license: creativeml-openrail-m ---
pellucid/my_awesome_opus100_model
pellucid
2023-06-21T15:57:28Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "longt5", "text2text-generation", "generated_from_trainer", "dataset:opus100", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-21T07:37:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - opus100 metrics: - bleu model-index: - name: my_awesome_opus100_model results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: opus100 type: opus100 config: en-ko split: train args: en-ko metrics: - name: Bleu type: bleu value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus100_model This model is a fine-tuned version of [KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko](https://huggingface.co/KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko) on the opus100 dataset. It achieves the following results on the evaluation set: - Loss: nan - Bleu: 0.0 - Gen Len: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 250 | nan | 2.9676 | 12.146 | | 2.5985 | 2.0 | 500 | nan | 0.0 | 0.0 | | 2.5985 | 3.0 | 750 | nan | 0.0 | 0.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
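Note that the NaN validation loss and 0.0 BLEU above suggest the run diverged, so this checkpoint may not produce useful translations; for completeness, a standard seq2seq inference sketch:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "pellucid/my_awesome_opus100_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# English -> Korean translation (the training config above is opus100 en-ko)
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```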
DioulaD/falcon-7b-qlora-ge-dq
DioulaD
2023-06-21T15:48:22Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-21T15:48:20Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
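For reference, the settings listed above map onto a `transformers` `BitsAndBytesConfig` roughly as follows; the base checkpoint `tiiuae/falcon-7b` is inferred from the adapter repo name and is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization config above: 4-bit NF4, double quantization, bf16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumption inferred from the adapter repo name
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon shipped custom modeling code at the time
)
```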
DeepLake/Alchemy_Stars_Vocal
DeepLake
2023-06-21T15:47:09Z
0
0
null
[ "vocal", "games", "zh", "ja", "license:unknown", "region:us" ]
null
2023-06-21T07:43:01Z
---
license: unknown
language:
- zh
- ja
tags:
- vocal
- games
---

For VITS. Trained with Alchemy Stars vocal data. Japanese (JP) and Chinese (CN) vocals are denoted in the file names, so take care to distinguish them.

用于VITS。用《白夜极光》的语音制作。日配JP,中配CN,见于文件名,注意区分。
bemc22/ppo-luna-lander-mark-i
bemc22
2023-06-21T15:46:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-20T14:07:52Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 288.66 +/- 12.85
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename follows the usual SB3 naming convention and is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; verify it against the files in this repo
checkpoint = load_from_hub(
    repo_id="bemc22/ppo-luna-lander-mark-i",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
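To reproduce the mean-reward metric reported in the card header, an evaluation sketch reusing `model` from the loading snippet above (assumes SB3 >= 2.0, which uses Gymnasium):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Monitor records episode returns so evaluate_policy can aggregate them
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```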
swl-models/Shanzhagao-v1
swl-models
2023-06-21T15:45:04Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T15:42:34Z
--- license: creativeml-openrail-m ---
swl-models/Entity_404
swl-models
2023-06-21T15:40:31Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T15:36:36Z
--- license: creativeml-openrail-m ---
hoyincheung/redpj3B-lora-int8-alpaca
hoyincheung
2023-06-21T15:29:51Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-21T15:29:50Z
---
library_name: peft
---
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

### Framework versions
- PEFT 0.4.0.dev0
abhishek-ignite/gpt-neo-1.3b-ignite-3
abhishek-ignite
2023-06-21T15:18:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-21T15:18:41Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
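A PEFT repo stores only the adapter weights, so the base model has to be loaded first; a sketch, where the base checkpoint `EleutherAI/gpt-neo-1.3B` is inferred from the adapter repo name and is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/gpt-neo-1.3B"  # assumption inferred from the adapter repo name
adapter_id = "abhishek-ignite/gpt-neo-1.3b-ignite-3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# load_in_8bit matches the quantization config listed above
base_model = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```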
p1atdev/pvc-v3
p1atdev
2023-06-21T15:14:34Z
61
57
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "safetensors", "en", "dataset:p1atdev/pvc", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-03-01T21:29:48Z
--- license: other datasets: - p1atdev/pvc language: - en library_name: diffusers thumbnail: "https://s3.amazonaws.com/moonup/production/uploads/1677743056321-6305db1fcfbde33ef7d480ff.png" tags: - text-to-image - stable-diffusion - safetensors widget: - text: pvc, anime, masterpiece, best quality, exceptional, 1girl, bangs, bare shoulders, beret, black hair, black shorts, blue hair, bracelet, breasts, buttons, colored inner hair, double-breasted, eyewear removed, green headwear, green jacket, grey eyes, grey sky, hat, jacket, jewelry, long hair, looking at viewer, multicolored hair, neck ring, o-ring, off shoulder, rain, round eyewear, shorts, sidelocks, small breasts, solo, sunglasses, wavy hair, wet, zipper example_title: The WD1.5 girl - text: pvc, anime, masterpiece, best quality, exceptional, 1girl, blonde hair, hat, baseball cap, aqua eyes, earrings, hoop earrings, yellow shirt, looking at viewer, upper body, simple background example_title: The blonde hair girl - text: pvc, masterpiece, best quality, exceptional, 1girl, cat ears, red hair, long hair, hairpin, swept bangs, yellow eyes, black jacket, white shirt, blue tie, white gloves, hand up, upper body, looking at viewer, buildings example_title: A red hair girl - text: nendoroid, masterpiece, best quality, exceptional, 1girl, cat ears, red hair, long hair, hairpin, swept bangs, yellow eyes, black jacket, white shirt, blue tie, white gloves, hand up, upper body, looking at viewer, example_title: nendoroid style - text: figma, masterpiece, best quality, exceptional, 1girl, cat ears, red hair, long hair, hairpin, swept bangs, yellow eyes, black jacket, white shirt, blue tie, white gloves, hand up, upper body, looking at viewer, buildings example_title: figma style --- # PVC v3 This model is a latent diffusion model finetuned on Waifu Diffusion v1.5 beta 2 with PVC figure images. You can use Danbooru tags to generate images. ## Downloads <div class="flex flex-col dark:bg-gray-900 rounded-md divide-y dark:divide-gray-800"> <div class="flex justify-between px-4 py-2"> <a class="underline" href="https://huggingface.co/p1atdev/pvc-v3/resolve/checkpoints/pvc-v3-fp16.safetensors">pvc-v3-fp16.safetensors</a> <div>2.58 GB</div> </div> <div class="flex justify-between px-4 py-2"> <a class="underline" href="https://huggingface.co/p1atdev/pvc-v3/resolve/checkpoints/pvc-v3-fp16.ckpt">pvc-v3-fp16.ckpt</a> <div>2.58 GB</div> </div> <div class="flex justify-between px-4 py-2"> <a class="underline" href="https://huggingface.co/p1atdev/pvc-v3/resolve/checkpoints/pvc-v3-fp32.safetensors">pvc-v3-fp32.safetensors</a> <div>5.16 GB</div> </div> <div class="flex justify-between px-4 py-2"> <a class="underline" href="https://huggingface.co/p1atdev/pvc-v3/resolve/checkpoints/pvc-v3-fp32.ckpt">pvc-v3-fp32.ckpt</a> <div>5.16 GB</div> </div> <div class="flex justify-between px-4 py-1"> <a class="underline opacity-75" href="https://huggingface.co/p1atdev/pvc-v3/tree/checkpoints">Show all</a> </div> </div> Please use [WD's vae](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt) to get good results! Also, you can use [badquality embedding](https://huggingface.co/p1atdev/badquality) in negative prompt! ## Prompt guide ### Trigger words - `pvc` means the pvc material style but not needed always. - `figma` is the figure style that has joints, and more tend to be product thumbnail images. Use with `doll joints` to get better joints. - `nendoroid` means the style of chibi figures. Use with `chibi` to get better results. 
### Tips The PVC figure style is closer to the anime style than to the realistic style. So, it is recommended to put `anime` to **positive** prompt or `realistic` to **negative** prompt to get better results sometimes. If you want to avoid too realistic faces, try this! ## Examples <div class="not-prose grid grid-cols-1 lg:grid-cols-2 gap-4"> <div class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677723651374-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, pvc, 1girl, cat ears, blue hair, gradient hair, colored inner hair, long hair, floating hair, blue eyes, school uniform, blue shirt, ribbon, short skirt, thighhighs, zettai ryouiki, school bag, from above, cowboy shot, looking at viewer, wind, street, day, Negative prompt: badquality, oldest, chibi, Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 744670484, Size: 576x768, Model hash: 0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent </p> </details> </div> </div> <div class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677725923219-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, exceptional, pvc, 1girl, bangs, bare shoulders, beret, black hair, black shorts, blue hair, bracelet, breasts, buttons, colored inner hair, double-breasted, eyewear removed, green headwear, green jacket, grey eyes, grey sky, hat, jacket, jewelry, long hair, looking at viewer, multicolored hair, neck ring, o-ring, off shoulder, rain, round eyewear, shorts, sidelocks, small breasts, solo, sunglasses, wavy hair, wet, zipper, Negative prompt: badquality, oldest, chibi, Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 2954169314, Size: 576x768, Model hash: 0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent</p> </details> </div> </div> <div class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677726308338-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, exceptional, pvc, 1girl, cat ears, red hair, long hair, hairpin, swept bangs, yellow eyes, black jacket, white shirt, blue tie, white gloves, hand up, upper body, looking at viewer, buildings Negative prompt: badquality, oldest, chibi, realistic Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 2320075190, Size: 576x768, Model hash: 0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent </p> </details> </div> </div> <div class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677730950628-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, exceptional, pvc, anime, 1boy, grey hair, red eyes, holding gun, handgun, black coat, looking at viewer, dynamic, Negative prompt: badquality, oldest, chibi, Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 2543033775, Size: 576x768, Model hash: 
0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent </p> </details> </div> </div> <div class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677734822577-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, exceptional, figma, 1girl, cat ears, blue hair, high ponytail, parted bangs, white shirt, dress shirt, short sleeves, shorts, looking at viewer, doll joints, Negative prompt: badquality, oldest, chibi Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 595390714, Size: 576x768, Model hash: 0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent </p> </details> </div> </div> <div class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677732154061-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, exceptional, figma, 1girl, brown hair, bob cut, blunt bangs, expressionless, red track suit, long pants, full body, running, dynamic, looking at viewer, Negative prompt: badquality, oldest, chibi, realistic, Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 617339547, Size: 576x768, Model hash: 0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent </p> </details> </div> </div> <div class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677733209241-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, exceptional, nendoroid, chibi, masterpiece, best quality, exceptional, 1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt, Negative prompt: badquality, oldest, realistic, Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 3673139852, Size: 576x768, Model hash: 0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent </p> </details> </div> </div> <div class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677733514916-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, exceptional, nendoroid, chibi, masterpiece, best quality, exceptional, 1girl, bare shoulders, baseball cap, black gloves, black headwear, black shirt, blue eyes, blue hair, breasts, coat, crop top, gloves, hand on hip, hat, large breasts, long hair, long sleeves, looking at viewer, mask, midriff, mouth mask, navel, off shoulder, open clothes, open coat, shirt, sleeveless, sleeveless shirt, solo, stomach, upper body, white coat, waifu, Negative prompt: badquality, oldest, realistic, Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 3256539262, Size: 576x768, Model hash: 0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent </p> </details> </div> </div> <div 
class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677737218611-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, exceptional, pvc, anime, 1girl, brown hair, school uniform, aqua ribbon, hand up, upper body, looking at viewer, beach, ocean, orange, sky, clouds, sunset, Negative prompt: badquality, oldest, chibi, simple background, bad anatomy, realistic, Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 537083103, Size: 768x576, Model hash: 0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent </p> </details> </div> </div> <div class="border dakr:border-gray-750 dark:bg-gray-850 rounded-md overflow-hidden"> <img class="w-full" src="https://s3.amazonaws.com/moonup/production/uploads/1677742433908-6305db1fcfbde33ef7d480ff.jpeg"/> <div class="px-4 py-2"> <details> <p class="whitespace-pre-line"> masterpiece, best quality, exceptional, pvc, anime, 1girl, young, light purple hair, short hair, streaked hair, wavy hair, red eyes, queen, crown, white dress, crossed legs, thighhighs, boots, sitting, close-up, looking at viewer, throne, dark curtains, dark atmosphere Negative prompt: badquality, oldest, chibi, simple background, bad anatomy, realistic Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 3981672289, Size: 768x576, Model hash: 0866b17d46, Model: pvc-v3-fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent </p> </details> </div> </div> </div> ## Training information <details> <table> <thead> <tr><th>Parameter</td><td>Value</th></tr> </thead> <tbody> <tr><td>Service</td><td>Runpod</td></tr> <tr><td>GPU</td><td>A5000</td></tr> <tr><td>Notebook</td><td><a href="https://github.com/Linaqruf/kohya-trainer/blob/main/kohya-trainer.ipynb" target="_blank">Linaqruf/kohya-trainer</a></td></tr> <tr><td>Cost</td><td>about $2</td></tr> <tr><td>Hours</td><td>about 6 hours</td></tr> <tr><td>Dataset</td><td>7467 images from p1atdev/pvc</td></tr> <tr><td>Resolution</td><td>896</td></tr> <tr><td>Epochs</td><td>5</td></tr> <tr><td>Optimizer</td><td>Lion</td></tr> <tr><td>LR</td><td>4e-7</td></tr> <tr><td>Scheduler</td><td>cosine_with_restarts</td></tr> <tr><td>Train Batch Size</td><td>1</td></tr> </tbody> </table> </details> ## 🧨 Diffusers Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner. 
```bash pip install diffusers transformers accelerate scipy safetensors pip install --pre xformers ``` Using StableDiffusionPipeline: ```py import torch from diffusers import StableDiffusionPipeline model_id = "p1atdev/pvc-v3" revision = "fp16" # "main" or "fp16" pipe = StableDiffusionPipeline.from_pretrained( model_id, revision=revision, torch_dtype=torch.float16, ) pipe = pipe.to("cuda") pipe.enable_attention_slicing() pipe.enable_xformers_memory_efficient_attention() # required prompt = "pvc, masterpiece, best quality, exceptional, 1girl, cat ears, red hair, long hair, hairpin, swept bangs, yellow eyes, black jacket, white shirt, blue tie, white gloves, hand up, upper body, looking at viewer, buildings" negative_prompt = "nsfw, nude, worst quality, low quality, oldest, bad anatomy" image = pipe( prompt, negative_prompt=negative_prompt, guidance_scale=7.0, num_inference_steps=20 ).images[0] # save image image.save("pvc_figure.png") # or just display it # display(image) ``` Using StableDiffusionLongPromptWeightingPipeline: ```py import torch from diffusers import DiffusionPipeline model_id = "p1atdev/pvc-v3" revision = "fp16" # "main" or "fp16" pipe = DiffusionPipeline.from_pretrained( model_id, revision=revision, torch_dtype=torch.float16, custom_pipeline="lpw_stable_diffusion" ) pipe = pipe.to("cuda") pipe.enable_attention_slicing() pipe.enable_xformers_memory_efficient_attention() # required prompt = """ pvc, anime, masterpiece, best quality, exceptional, 1girl, bangs, bare shoulders, beret, black hair, black shorts, blue hair, bracelet, breasts, buttons, colored inner hair, double-breasted, eyewear removed, green headwear, green jacket, grey eyes, grey sky, hat, jacket, jewelry, long hair, looking at viewer, multicolored hair, neck ring, o-ring, off shoulder, rain, round eyewear, shorts, sidelocks, small breasts, solo, sunglasses, wavy hair, wet, zipper """ # long prompt negative_prompt = "nsfw, nude, worst quality, low quality, oldest, bad anatomy" image = pipe( prompt, negative_prompt=negative_prompt, guidance_scale=7.0, num_inference_steps=20 ).images[0] display(image) ``` ## License PVC v3 is released under the Fair AI Public License 1.0-SD (https://freedevproject.org/faipl-1.0-sd/). If any derivative of this model is made, please share your changes accordingly. Special thanks to ronsor/undeleted (https://undeleted.ronsor.com/) for help with the license.
swl-models/CMixS-v1.0
swl-models
2023-06-21T15:07:35Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-21T14:56:24Z
--- license: creativeml-openrail-m ---
IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese
IDEA-CCNL
2023-06-21T15:05:49Z
39
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "classification", "zh", "arxiv:2211.11304", "license:apache-2.0", "autotrain_compatible", "region:us" ]
fill-mask
2022-10-21T10:08:07Z
--- language: - zh license: apache-2.0 tags: - classification inference: false --- # Erlangshen-TCBert-110M-Classification-Chinese - Main Page:[Fengshenbang](https://fengshenbang-lm.com/) - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) ## 简介 Brief Introduction 110M参数的Topic Classification BERT (TCBert)。 The TCBert with 110M parameters is pre-trained for, not limited to, Chinese topic classification tasks. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | TCBert | 110M | Chinese | ## 模型信息 Model Information 为了提高模型在话题分类上的效果,我们收集了大量话题分类数据进行基于prompts的预训练。 To improve the model performance on the topic classification task, we collected numerous topic classification datasets for pre-training based on general prompts. ### 下游效果 Performance 我们为每个数据集设计了两个prompt模板。 We customize two prompts templates for each dataset. 第一个prompt模板: For ***prompt template 1***: | Dataset | Prompt template 1 | |---------|:------------------------:| | TNEWS | 下面是一则关于__的新闻: | | CSLDCP | 这一句描述__的内容如下: | | IFLYTEK | 这一句描述__的内容如下: | 第一个prompt模板的微调实验结果: The **fine-tuning** results for prompt template 1: | Model | TNEWS | CLSDCP | IFLYTEK | |-----------------|:------:|:------:|:-------:| | Macbert-base | 55.02 | 57.37 | 51.34 | | Macbert-large | 55.77 | 58.99 | 50.31 | | Erlangshen-1.3B | 57.36 | 62.35 | 53.23 | | TCBert-base<sub>110M-Classification-Chinese | 55.57 | 58.60 | 49.63 | | TCBert-large<sub>330M-Classification-Chinese | 56.17 | 60.06 | 51.34 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 65.10 | 53.75 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 57.46 | 65.04 | 53.06 | 第一个prompt模板的句子相似度结果: The **sentence similarity** results for prompt template 1: | | TNEWS | | CSLDCP | | IFLYTEK | | |-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:| | Model | referece | whitening | reference | whitening | reference | whitening | | Macbert-base | 43.53 | 47.16 | 33.50 | 36.53 | 28.99 | 33.85 | | Macbert-large | 46.17 | 49.35 | 37.65 | 39.38 | 32.36 | 35.33 | | Erlangshen-1.3B | 45.72 | 49.60 | 40.56 | 44.26 | 29.33 | 36.48 | | TCBert-base<sub>110M-Classification-Chinese | 48.61 | 51.99 | 43.31 | 45.15 | 33.45 | 37.28 | | TCBert-large<sub>330M-Classification-Chinese | 50.50 | 52.79 | 52.89 | 53.89 | 34.93 | 38.31 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 50.80 | 51.59 | 51.93 | 54.12 | 33.96 | 38.08 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 45.82 | 47.06 | 42.91 | 43.87 | 33.28 | 34.76 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.10 | 50.90 | 53.78 | 53.33 | 37.62 | 36.94 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.70 | 53.48 | 52.66 | 54.40 | 36.88 | 38.48 | 第二个prompt模板: For ***prompt template 2***: | Dataset | Prompt template 2 | |---------|:------------------------:| | TNEWS | 接下来的新闻,是跟__相关的内容: | | CSLDCP | 接下来的学科,是跟__相关: | | IFLYTEK | 接下来的生活内容,是跟__相关: | 第二个prompt模板的微调结果: The **fine-tuning** results for prompt template 2: | Model | TNEWS | CLSDCP | IFLYTEK | |-----------------|:------:|:------:|:-------:| | Macbert-base | 54.78 | 58.38 | 50.83 | | Macbert-large | 56.77 | 60.22 | 51.63 | | Erlangshen-1.3B | 57.81 | 62.80 | 52.77 | | TCBert-base<sub>110M-Classification-Chinese | 54.58 | 59.16 | 49.80 | | 
TCBert-large<sub>330M-Classification-Chinese | 56.22 | 61.23 | 50.77 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 64.82 | 53.34 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 56.87 | 65.83 | 52.94 | 第二个prompt模板的句子相似度结果: The **sentence similarity** results for prompt template 2: | | TNEWS | | CSLDCP | | IFLYTEK | | |-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:| | Model | referece | whitening | reference | whitening | reference | whitening | | Macbert-base | 42.29 | 45.22 | 34.23 | 37.48 | 29.62 | 34.13 | | Macbert-large | 46.22 | 49.60 | 40.11 | 44.26 | 32.36 | 35.16 | | Erlangshen-1.3B | 46.17 | 49.10 | 40.45 | 45.88 | 30.36 | 36.88 | | TCBert-base<sub>110M-Classification-Chinese | 48.31 | 51.34 | 43.42 | 45.27 | 33.10 | 36.19 | | TCBert-large<sub>330M-Classification-Chinese | 51.19 | 51.69 | 52.55 | 53.28 | 34.31 | 37.45 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 52.14 | 52.39 | 51.71 | 53.89 | 33.62 | 38.14 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 46.72 | 48.86 | 43.19 | 43.53 | 34.08 | 35.79 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.65 | 51.94 | 53.84 | 53.67 | 37.74 | 36.65 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.75 | 54.78 | 51.43 | 54.34 | 36.48 | 38.36 | 更多关于TCBERTs的细节,请参考我们的技术报告。基于新的数据,我们会更新TCBERTs,请留意我们仓库的更新。 For more details about TCBERTs, please refer to our paper. We may regularly update TCBERTs upon new coming data, please keep an eye on the repo! ## 使用 Usage ### 使用示例 Usage Examples ```python # Prompt-based MLM fine-tuning from transformers import BertForMaskedLM, BertTokenizer import torch # Loading models tokenizer=BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese") model=BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese") # Prepare the data inputs = tokenizer("下面是一则关于[MASK][MASK]的新闻:怎样的房子才算户型方正?", return_tensors="pt") labels = tokenizer("下面是一则关于房产的新闻:怎样的房子才算户型方正?", return_tensors="pt")["input_ids"] labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) # Output the loss outputs = model(**inputs, labels=labels) loss = outputs.loss ``` ```python # Prompt-based Sentence Similarity # To extract sentence representations. 
from transformers import BertForMaskedLM, BertTokenizer
import torch

# Loading models
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese")
model = BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese")

# Cosine similarity function
cos = torch.nn.CosineSimilarity(dim=0, eps=1e-8)

with torch.no_grad():
    # To extract the sentence representation of the training sample
    training_input = tokenizer("怎样的房子才算户型方正?", return_tensors="pt")
    training_output = model(**training_input, output_hidden_states=True)
    training_representation = torch.mean(training_output.hidden_states[-1].squeeze(), dim=0)

    # To extract the sentence representation of the test sample
    test_input = tokenizer("下面是一则关于[MASK][MASK]的新闻:股票放量下跌,大资金出逃谁在接盘?", return_tensors="pt")
    test_output = model(**test_input, output_hidden_states=True)
    test_representation = torch.mean(test_output.hidden_states[-1].squeeze(), dim=0)

# Calculate similarity scores
similarity_score = cos(training_representation, test_representation)
```

## 引用 Citation

如果您在您的工作中使用了我们的模型,可以引用我们的[技术报告](https://arxiv.org/abs/2211.11304):

If you use our model in your work, please cite the following paper:

```
@article{han2022tcbert,
  title={TCBERT: A Technical Report for Chinese Topic Classification BERT},
  author={Han, Ting and Pan, Kunhao and Chen, Xinyu and Song, Dingjie and Fan, Yuchen and Gao, Xinyu and Gan, Ruyi and Zhang, Jiaxing},
  journal={arXiv preprint arXiv:2211.11304},
  year={2022}
}
```

如果您在您的工作中使用了我们的模型,可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
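The "whitening" columns in the evaluation tables above refer to the common post-processing of sentence embeddings before computing cosine similarity; one common formulation is sketched below (a minimal illustration, not the paper's exact implementation):

```python
import torch

def whiten(embeddings: torch.Tensor) -> torch.Tensor:
    """Whiten a (num_sentences, hidden_size) matrix of sentence embeddings."""
    mu = embeddings.mean(dim=0, keepdim=True)
    cov = torch.cov((embeddings - mu).T)                      # (hidden, hidden) covariance
    u, s, _ = torch.linalg.svd(cov)                           # cov = u @ diag(s) @ u.T
    w = u @ torch.diag(1.0 / torch.sqrt(s.clamp_min(1e-8)))   # whitening matrix
    return (embeddings - mu) @ w                              # zero mean, identity covariance
```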
IDEA-CCNL/Erlangshen-TCBert-1.3B-Classification-Chinese
IDEA-CCNL
2023-06-21T15:05:14Z
12
1
transformers
[ "transformers", "pytorch", "bert", "classification", "zh", "arxiv:2211.11304", "license:apache-2.0", "region:us" ]
null
2022-10-21T10:30:50Z
---
language:
- zh
license: apache-2.0
tags:
- classification
inference: false
---

# Erlangshen-TCBert-1.3B-Classification-Chinese

- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)

## 简介 Brief Introduction

1.3B参数的Topic Classification BERT (TCBert)。

The TCBert with 1.3B parameters is pre-trained for, but not limited to, Chinese topic classification tasks.

## 模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | TCBert | 1.3B | Chinese |

## 模型信息 Model Information

为了提高模型在话题分类上的效果,我们收集了大量话题分类数据进行基于prompts的预训练。

To improve the model performance on the topic classification task, we collected numerous topic classification datasets for pre-training based on general prompts.

### 下游效果 Performance

我们为每个数据集设计了两个prompt模板。

We customize two prompt templates for each dataset.

第一个prompt模板:

For ***prompt template 1***:

| Dataset | Prompt template 1 |
|---------|:------------------------:|
| TNEWS | 下面是一则关于__的新闻: |
| CSLDCP | 这一句描述__的内容如下: |
| IFLYTEK | 这一句描述__的内容如下: |

第一个prompt模板的微调实验结果:

The **fine-tuning** results for prompt template 1:

| Model | TNEWS | CLSDCP | IFLYTEK |
|-----------------|:------:|:------:|:-------:|
| Macbert-base | 55.02 | 57.37 | 51.34 |
| Macbert-large | 55.77 | 58.99 | 50.31 |
| Erlangshen-1.3B | 57.36 | 62.35 | 53.23 |
| TCBert-base<sub>110M-Classification-Chinese | 55.57 | 58.60 | 49.63 |
| TCBert-large<sub>330M-Classification-Chinese | 56.17 | 60.06 | 51.34 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 65.10 | 53.75 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 57.46 | 65.04 | 53.06 |

第一个prompt模板的句子相似度结果:

The **sentence similarity** results for prompt template 1:

| | TNEWS | | CSLDCP | | IFLYTEK | |
|-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| Model | reference | whitening | reference | whitening | reference | whitening |
| Macbert-base | 43.53 | 47.16 | 33.50 | 36.53 | 28.99 | 33.85 |
| Macbert-large | 46.17 | 49.35 | 37.65 | 39.38 | 32.36 | 35.33 |
| Erlangshen-1.3B | 45.72 | 49.60 | 40.56 | 44.26 | 29.33 | 36.48 |
| TCBert-base<sub>110M-Classification-Chinese | 48.61 | 51.99 | 43.31 | 45.15 | 33.45 | 37.28 |
| TCBert-large<sub>330M-Classification-Chinese | 50.50 | 52.79 | 52.89 | 53.89 | 34.93 | 38.31 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 50.80 | 51.59 | 51.93 | 54.12 | 33.96 | 38.08 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 45.82 | 47.06 | 42.91 | 43.87 | 33.28 | 34.76 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.10 | 50.90 | 53.78 | 53.33 | 37.62 | 36.94 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.70 | 53.48 | 52.66 | 54.40 | 36.88 | 38.48 |

第二个prompt模板:

For ***prompt template 2***:

| Dataset | Prompt template 2 |
|---------|:------------------------:|
| TNEWS | 接下来的新闻,是跟__相关的内容: |
| CSLDCP | 接下来的学科,是跟__相关: |
| IFLYTEK | 接下来的生活内容,是跟__相关: |

第二个prompt模板的微调结果:

The **fine-tuning** results for prompt template 2:

| Model | TNEWS | CLSDCP | IFLYTEK |
|-----------------|:------:|:------:|:-------:|
| Macbert-base | 54.78 | 58.38 | 50.83 |
| Macbert-large | 56.77 | 60.22 | 51.63 |
| Erlangshen-1.3B | 57.81 | 62.80 | 52.77 |
| TCBert-base<sub>110M-Classification-Chinese | 54.58 | 59.16 | 49.80 |
| TCBert-large<sub>330M-Classification-Chinese | 56.22 | 61.23 | 50.77 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 64.82 | 53.34 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 56.87 | 65.83 | 52.94 |

第二个prompt模板的句子相似度结果:

The **sentence similarity** results for prompt template 2:

| | TNEWS | | CSLDCP | | IFLYTEK | |
|-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| Model | reference | whitening | reference | whitening | reference | whitening |
| Macbert-base | 42.29 | 45.22 | 34.23 | 37.48 | 29.62 | 34.13 |
| Macbert-large | 46.22 | 49.60 | 40.11 | 44.26 | 32.36 | 35.16 |
| Erlangshen-1.3B | 46.17 | 49.10 | 40.45 | 45.88 | 30.36 | 36.88 |
| TCBert-base<sub>110M-Classification-Chinese | 48.31 | 51.34 | 43.42 | 45.27 | 33.10 | 36.19 |
| TCBert-large<sub>330M-Classification-Chinese | 51.19 | 51.69 | 52.55 | 53.28 | 34.31 | 37.45 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 52.14 | 52.39 | 51.71 | 53.89 | 33.62 | 38.14 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 46.72 | 48.86 | 43.19 | 43.53 | 34.08 | 35.79 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.65 | 51.94 | 53.84 | 53.67 | 37.74 | 36.65 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.75 | 54.78 | 51.43 | 54.34 | 36.48 | 38.36 |

更多关于TCBERTs的细节,请参考我们的技术报告。基于新的数据,我们会更新TCBERTs,请留意我们仓库的更新。

For more details about TCBERTs, please refer to our paper. We may update TCBERTs regularly as new data becomes available, so please keep an eye on the repo!

## 使用 Usage

### 使用示例 Usage Examples

```python
# Prompt-based MLM fine-tuning
from transformers import BertForMaskedLM, BertTokenizer
import torch

# Loading models
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-1.3B-Classification-Chinese")
model = BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-1.3B-Classification-Chinese")

# Prepare the data
inputs = tokenizer("下面是一则关于[MASK][MASK]的新闻:怎样的房子才算户型方正?", return_tensors="pt")
labels = tokenizer("下面是一则关于房产的新闻:怎样的房子才算户型方正?", return_tensors="pt")["input_ids"]
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

# Output the loss
outputs = model(**inputs, labels=labels)
loss = outputs.loss
```

```python
# Prompt-based Sentence Similarity
# To extract sentence representations.
from transformers import BertForMaskedLM, BertTokenizer
import torch

# Loading models
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-1.3B-Classification-Chinese")
model = BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-1.3B-Classification-Chinese")

# Cosine similarity function
cos = torch.nn.CosineSimilarity(dim=0, eps=1e-8)

with torch.no_grad():
    # To extract the sentence representation of the training sample
    training_input = tokenizer("怎样的房子才算户型方正?", return_tensors="pt")
    training_output = model(**training_input, output_hidden_states=True)
    training_representation = torch.mean(training_output.hidden_states[-1].squeeze(), dim=0)

    # To extract the sentence representation of the test sample
    test_input = tokenizer("下面是一则关于[MASK][MASK]的新闻:股票放量下跌,大资金出逃谁在接盘?", return_tensors="pt")
    test_output = model(**test_input, output_hidden_states=True)
    test_representation = torch.mean(test_output.hidden_states[-1].squeeze(), dim=0)

# Calculate similarity scores
similarity_score = cos(training_representation, test_representation)
```

## 引用 Citation

如果您在您的工作中使用了我们的模型,可以引用我们的[技术报告](https://arxiv.org/abs/2211.11304):

If you use our model in your work, please cite the following paper:

```
@article{han2022tcbert,
  title={TCBERT: A Technical Report for Chinese Topic Classification BERT},
  author={Han, Ting and Pan, Kunhao and Chen, Xinyu and Song, Dingjie and Fan, Yuchen and Gao, Xinyu and Gan, Ruyi and Zhang, Jiaxing},
  journal={arXiv preprint arXiv:2211.11304},
  year={2022}
}
```

如果您在您的工作中使用了我们的模型,可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
IDEA-CCNL/Erlangshen-TCBert-330M-Classification-Chinese
IDEA-CCNL
2023-06-21T15:04:40Z
8
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "classification", "zh", "arxiv:2211.11304", "license:apache-2.0", "autotrain_compatible", "region:us" ]
fill-mask
2022-10-21T10:29:37Z
--- language: - zh license: apache-2.0 tags: - classification inference: false --- # Erlangshen-TCBert-330M-Classification-Chinese - Main Page:[Fengshenbang](https://fengshenbang-lm.com/) - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) ## 简介 Brief Introduction 330M参数的Topic Classification BERT (TCBert)。 The TCBert with 330M parameters is pre-trained for, not limited to, Chinese topic classification tasks. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | TCBert | 330M | Chinese | ## 模型信息 Model Information 为了提高模型在话题分类上的效果,我们收集了大量话题分类数据进行基于prompts的预训练。 To improve the model performance on the topic classification task, we collected numerous topic classification datasets for pre-training based on general prompts. ### 下游效果 Performance 我们为每个数据集设计了两个prompt模板。 We customize two prompts templates for each dataset. 第一个prompt模板: For ***prompt template 1***: | Dataset | Prompt template 1 | |---------|:------------------------:| | TNEWS | 下面是一则关于__的新闻: | | CSLDCP | 这一句描述__的内容如下: | | IFLYTEK | 这一句描述__的内容如下: | 第一个prompt模板的微调实验结果: The **fine-tuning** results for prompt template 1: | Model | TNEWS | CLSDCP | IFLYTEK | |-----------------|:------:|:------:|:-------:| | Macbert-base | 55.02 | 57.37 | 51.34 | | Macbert-large | 55.77 | 58.99 | 50.31 | | Erlangshen-1.3B | 57.36 | 62.35 | 53.23 | | TCBert-base<sub>110M-Classification-Chinese | 55.57 | 58.60 | 49.63 | | TCBert-large<sub>330M-Classification-Chinese | 56.17 | 60.06 | 51.34 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 65.10 | 53.75 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 57.46 | 65.04 | 53.06 | 第一个prompt模板的句子相似度结果: The **sentence similarity** results for prompt template 1: | | TNEWS | | CSLDCP | | IFLYTEK | | |-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:| | Model | referece | whitening | reference | whitening | reference | whitening | | Macbert-base | 43.53 | 47.16 | 33.50 | 36.53 | 28.99 | 33.85 | | Macbert-large | 46.17 | 49.35 | 37.65 | 39.38 | 32.36 | 35.33 | | Erlangshen-1.3B | 45.72 | 49.60 | 40.56 | 44.26 | 29.33 | 36.48 | | TCBert-base<sub>110M-Classification-Chinese | 48.61 | 51.99 | 43.31 | 45.15 | 33.45 | 37.28 | | TCBert-large<sub>330M-Classification-Chinese | 50.50 | 52.79 | 52.89 | 53.89 | 34.93 | 38.31 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 50.80 | 51.59 | 51.93 | 54.12 | 33.96 | 38.08 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 45.82 | 47.06 | 42.91 | 43.87 | 33.28 | 34.76 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.10 | 50.90 | 53.78 | 53.33 | 37.62 | 36.94 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.70 | 53.48 | 52.66 | 54.40 | 36.88 | 38.48 | 第二个prompt模板: For ***prompt template 2***: | Dataset | Prompt template 2 | |---------|:------------------------:| | TNEWS | 接下来的新闻,是跟__相关的内容: | | CSLDCP | 接下来的学科,是跟__相关: | | IFLYTEK | 接下来的生活内容,是跟__相关: | 第二个prompt模板的微调结果: The **fine-tuning** results for prompt template 2: | Model | TNEWS | CLSDCP | IFLYTEK | |-----------------|:------:|:------:|:-------:| | Macbert-base | 54.78 | 58.38 | 50.83 | | Macbert-large | 56.77 | 60.22 | 51.63 | | Erlangshen-1.3B | 57.81 | 62.80 | 52.77 | | TCBert-base<sub>110M-Classification-Chinese | 54.58 | 59.16 | 49.80 | | 
TCBert-large<sub>330M-Classification-Chinese | 56.22 | 61.23 | 50.77 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 64.82 | 53.34 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 56.87 | 65.83 | 52.94 | 第二个prompt模板的句子相似度结果: The **sentence similarity** results for prompt template 2: | | TNEWS | | CSLDCP | | IFLYTEK | | |-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:| | Model | referece | whitening | reference | whitening | reference | whitening | | Macbert-base | 42.29 | 45.22 | 34.23 | 37.48 | 29.62 | 34.13 | | Macbert-large | 46.22 | 49.60 | 40.11 | 44.26 | 32.36 | 35.16 | | Erlangshen-1.3B | 46.17 | 49.10 | 40.45 | 45.88 | 30.36 | 36.88 | | TCBert-base<sub>110M-Classification-Chinese | 48.31 | 51.34 | 43.42 | 45.27 | 33.10 | 36.19 | | TCBert-large<sub>330M-Classification-Chinese | 51.19 | 51.69 | 52.55 | 53.28 | 34.31 | 37.45 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 52.14 | 52.39 | 51.71 | 53.89 | 33.62 | 38.14 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 46.72 | 48.86 | 43.19 | 43.53 | 34.08 | 35.79 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.65 | 51.94 | 53.84 | 53.67 | 37.74 | 36.65 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.75 | 54.78 | 51.43 | 54.34 | 36.48 | 38.36 | 更多关于TCBERTs的细节,请参考我们的技术报告。基于新的数据,我们会更新TCBERTs,请留意我们仓库的更新。 For more details about TCBERTs, please refer to our paper. We may regularly update TCBERTs upon new coming data, please keep an eye on the repo! ## 使用 Usage ### 使用示例 Usage Examples ```python # Prompt-based MLM fine-tuning from transformers import BertForMaskedLM, BertTokenizer import torch # Loading models tokenizer=BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Classification-Chinese") model=BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Classification-Chinese") # Prepare the data inputs = tokenizer("下面是一则关于[MASK][MASK]的新闻:怎样的房子才算户型方正?", return_tensors="pt") labels = tokenizer("下面是一则关于房产的新闻:怎样的房子才算户型方正?", return_tensors="pt")["input_ids"] labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) # Output the loss outputs = model(**inputs, labels=labels) loss = outputs.loss ``` ```python # Prompt-based Sentence Similarity # To extract sentence representations. 
from transformers import BertForMaskedLM, BertTokenizer
import torch

# Loading models
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Classification-Chinese")
model = BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Classification-Chinese")

# Cosine similarity function
cos = torch.nn.CosineSimilarity(dim=0, eps=1e-8)

with torch.no_grad():
    # To extract the sentence representation of the training sample
    training_input = tokenizer("怎样的房子才算户型方正?", return_tensors="pt")
    training_output = model(**training_input, output_hidden_states=True)
    training_representation = torch.mean(training_output.hidden_states[-1].squeeze(), dim=0)

    # To extract the sentence representation of the test sample
    test_input = tokenizer("下面是一则关于[MASK][MASK]的新闻:股票放量下跌,大资金出逃谁在接盘?", return_tensors="pt")
    test_output = model(**test_input, output_hidden_states=True)
    test_representation = torch.mean(test_output.hidden_states[-1].squeeze(), dim=0)

# Calculate similarity scores
similarity_score = cos(training_representation, test_representation)
```

## 引用 Citation

如果您在您的工作中使用了我们的模型,可以引用我们的[技术报告](https://arxiv.org/abs/2211.11304):

If you use our model in your work, please cite the following paper:

```
@article{han2022tcbert,
  title={TCBERT: A Technical Report for Chinese Topic Classification BERT},
  author={Han, Ting and Pan, Kunhao and Chen, Xinyu and Song, Dingjie and Fan, Yuchen and Gao, Xinyu and Gan, Ruyi and Zhang, Jiaxing},
  journal={arXiv preprint arXiv:2211.11304},
  year={2022}
}
```

如果您在您的工作中使用了我们的模型,可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
IDEA-CCNL/Erlangshen-TCBert-110M-Sentence-Embedding-Chinese
IDEA-CCNL
2023-06-21T15:03:22Z
47
5
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "classification", "zh", "arxiv:2211.11304", "license:apache-2.0", "autotrain_compatible", "region:us" ]
fill-mask
2022-10-21T10:27:40Z
--- language: - zh license: apache-2.0 tags: - classification inference: false --- # IDEA-CCNL/Erlangshen-TCBert-110M-Sentence-Embedding-Chinese - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/) ## 简介 Brief Introduction 110M参数的句子表征Topic Classification BERT (TCBert)。 The TCBert with 110M parameters is pre-trained for sentence representation for Chinese topic classification tasks. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 句子表征 | 二郎神 Erlangshen | TCBert (sentence representation) | 110M | Chinese | ## 模型信息 Model Information 为了提高模型在话题分类上句子表征效果,我们收集了大量话题分类数据进行基于prompts的对比学习预训练。 To improve the model performance on sentence representation for the topic classification task, we collected numerous topic classification datasets for contrastive pre-training based on general prompts. ### 下游效果 Performance 我们为每个数据集设计了两个prompt模板。 We customize two prompts templates for each dataset. 第一个prompt模板: For ***prompt template 1***: | Dataset | Prompt template 1 | |---------|:------------------------:| | TNEWS | 下面是一则关于__的新闻: | | CSLDCP | 这一句描述__的内容如下: | | IFLYTEK | 这一句描述__的内容如下: | 第一个prompt模板的微调实验结果: The **fine-tuning** results for prompt template 1: | Model | TNEWS | CLSDCP | IFLYTEK | |-----------------|:------:|:------:|:-------:| | Macbert-base | 55.02 | 57.37 | 51.34 | | Macbert-large | 55.77 | 58.99 | 50.31 | | Erlangshen-1.3B | 57.36 | 62.35 | 53.23 | | TCBert-base<sub>110M-Classification-Chinese | 55.57 | 58.60 | 49.63 | | TCBert-large<sub>330M-Classification-Chinese | 56.17 | 60.06 | 51.34 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 65.10 | 53.75 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 57.46 | 65.04 | 53.06 | 第一个prompt模板的句子相似度结果: The **sentence similarity** results for prompt template 1: | | TNEWS | | CSLDCP | | IFLYTEK | | |-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:| | Model | referece | whitening | reference | whitening | reference | whitening | | Macbert-base | 43.53 | 47.16 | 33.50 | 36.53 | 28.99 | 33.85 | | Macbert-large | 46.17 | 49.35 | 37.65 | 39.38 | 32.36 | 35.33 | | Erlangshen-1.3B | 45.72 | 49.60 | 40.56 | 44.26 | 29.33 | 36.48 | | TCBert-base<sub>110M-Classification-Chinese | 48.61 | 51.99 | 43.31 | 45.15 | 33.45 | 37.28 | | TCBert-large<sub>330M-Classification-Chinese | 50.50 | 52.79 | 52.89 | 53.89 | 34.93 | 38.31 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 50.80 | 51.59 | 51.93 | 54.12 | 33.96 | 38.08 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 45.82 | 47.06 | 42.91 | 43.87 | 33.28 | 34.76 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.10 | 50.90 | 53.78 | 53.33 | 37.62 | 36.94 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.70 | 53.48 | 52.66 | 54.40 | 36.88 | 38.48 | 第二个prompt模板: For ***prompt template 2***: | Dataset | Prompt template 2 | |---------|:------------------------:| | TNEWS | 接下来的新闻,是跟__相关的内容: | | CSLDCP | 接下来的学科,是跟__相关: | | IFLYTEK | 接下来的生活内容,是跟__相关: | 第二个prompt模板的微调结果: The **fine-tuning** results for prompt template 2: | Model | TNEWS | CLSDCP | IFLYTEK | |-----------------|:------:|:------:|:-------:| | Macbert-base | 54.78 | 58.38 | 50.83 | | Macbert-large | 56.77 | 60.22 | 51.63 | | Erlangshen-1.3B 
| 57.81 | 62.80 | 52.77 | | TCBert-base<sub>110M-Classification-Chinese | 54.58 | 59.16 | 49.80 | | TCBert-large<sub>330M-Classification-Chinese | 56.22 | 61.23 | 50.77 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 64.82 | 53.34 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 56.87 | 65.83 | 52.94 | 第二个prompt模板的句子相似度结果: The **sentence similarity** results for prompt template 2: | | TNEWS | | CSLDCP | | IFLYTEK | | |-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:| | Model | referece | whitening | reference | whitening | reference | whitening | | Macbert-base | 42.29 | 45.22 | 34.23 | 37.48 | 29.62 | 34.13 | | Macbert-large | 46.22 | 49.60 | 40.11 | 44.26 | 32.36 | 35.16 | | Erlangshen-1.3B | 46.17 | 49.10 | 40.45 | 45.88 | 30.36 | 36.88 | | TCBert-base<sub>110M-Classification-Chinese | 48.31 | 51.34 | 43.42 | 45.27 | 33.10 | 36.19 | | TCBert-large<sub>330M-Classification-Chinese | 51.19 | 51.69 | 52.55 | 53.28 | 34.31 | 37.45 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 52.14 | 52.39 | 51.71 | 53.89 | 33.62 | 38.14 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 46.72 | 48.86 | 43.19 | 43.53 | 34.08 | 35.79 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.65 | 51.94 | 53.84 | 53.67 | 37.74 | 36.65 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.75 | 54.78 | 51.43 | 54.34 | 36.48 | 38.36 | 更多关于TCBERTs的细节,请参考我们的技术报告。基于新的数据,我们会更新TCBERTs,请留意我们仓库的更新。 For more details about TCBERTs, please refer to our paper. We may regularly update TCBERTs upon new coming data, please keep an eye on the repo! ## 使用 Usage ### 使用示例 Usage Examples ```python # Prompt-based MLM fine-tuning from transformers import BertForMaskedLM, BertTokenizer import torch # Loading models tokenizer=BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Sentence-Embedding-Chinese") model=BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Sentence-Embedding-Chinese") # Prepare the data inputs = tokenizer("下面是一则关于[MASK][MASK]的新闻:怎样的房子才算户型方正?", return_tensors="pt") labels = tokenizer("下面是一则关于房产的新闻:怎样的房子才算户型方正?", return_tensors="pt")["input_ids"] labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) # Output the loss outputs = model(**inputs, labels=labels) loss = outputs.loss ``` ```python # Prompt-based Sentence Similarity # To extract sentence representations. 
from transformers import BertForMaskedLM, BertTokenizer
import torch

# Loading models
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Sentence-Embedding-Chinese")
model = BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Sentence-Embedding-Chinese")

# Cosine similarity function
cos = torch.nn.CosineSimilarity(dim=0, eps=1e-8)

with torch.no_grad():
    # To extract the sentence representation of the training sample
    training_input = tokenizer("怎样的房子才算户型方正?", return_tensors="pt")
    training_output = model(**training_input, output_hidden_states=True)
    training_representation = torch.mean(training_output.hidden_states[-1].squeeze(), dim=0)

    # To extract the sentence representation of the test sample
    test_input = tokenizer("下面是一则关于[MASK][MASK]的新闻:股票放量下跌,大资金出逃谁在接盘?", return_tensors="pt")
    test_output = model(**test_input, output_hidden_states=True)
    test_representation = torch.mean(test_output.hidden_states[-1].squeeze(), dim=0)

# Calculate similarity scores
similarity_score = cos(training_representation, test_representation)
```

## 引用 Citation

如果您在您的工作中使用了我们的模型,可以引用我们的[技术报告](https://arxiv.org/abs/2211.11304):

If you use our model in your work, please cite the following paper:

```
@article{han2022tcbert,
  title={TCBERT: A Technical Report for Chinese Topic Classification BERT},
  author={Han, Ting and Pan, Kunhao and Chen, Xinyu and Song, Dingjie and Fan, Yuchen and Gao, Xinyu and Gan, Ruyi and Zhang, Jiaxing},
  journal={arXiv preprint arXiv:2211.11304},
  year={2022}
}
```

如果您在您的工作中使用了我们的模型,可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese
IDEA-CCNL
2023-06-21T15:01:26Z
344
9
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "classification", "zh", "arxiv:2211.11304", "license:apache-2.0", "autotrain_compatible", "region:us" ]
fill-mask
2022-10-22T05:47:52Z
--- language: - zh license: apache-2.0 tags: - classification inference: false --- # IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese - Main Page:[Fengshenbang](https://fengshenbang-lm.com/) - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) ## 简介 Brief Introduction 330M参数的句子表征Topic Classification BERT (TCBert)。 The TCBert with 330M parameters is pre-trained for sentence representation for Chinese topic classification tasks. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 句子表征 | 二郎神 Erlangshen | TCBert (sentence representation) | 330M | Chinese | ## 模型信息 Model Information 为了提高模型在话题分类上句子表征效果,我们收集了大量话题分类数据进行基于prompts的对比学习预训练。 To improve the model performance on sentence representation for the topic classification task, we collected numerous topic classification datasets for contrastive pre-training based on general prompts. ### 下游效果 Performance 我们为每个数据集设计了两个prompt模板。 We customize two prompts templates for each dataset. 第一个prompt模板: For ***prompt template 1***: | Dataset | Prompt template 1 | |---------|:------------------------:| | TNEWS | 下面是一则关于__的新闻: | | CSLDCP | 这一句描述__的内容如下: | | IFLYTEK | 这一句描述__的内容如下: | 第一个prompt模板的微调实验结果: The **fine-tuning** results for prompt template 1: | Model | TNEWS | CLSDCP | IFLYTEK | |-----------------|:------:|:------:|:-------:| | Macbert-base | 55.02 | 57.37 | 51.34 | | Macbert-large | 55.77 | 58.99 | 50.31 | | Erlangshen-1.3B | 57.36 | 62.35 | 53.23 | | TCBert-base<sub>110M-Classification-Chinese | 55.57 | 58.60 | 49.63 | | TCBert-large<sub>330M-Classification-Chinese | 56.17 | 60.06 | 51.34 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 65.10 | 53.75 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 57.46 | 65.04 | 53.06 | 第一个prompt模板的句子相似度结果: The **sentence similarity** results for prompt template 1: | | TNEWS | | CSLDCP | | IFLYTEK | | |-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:| | Model | referece | whitening | reference | whitening | reference | whitening | | Macbert-base | 43.53 | 47.16 | 33.50 | 36.53 | 28.99 | 33.85 | | Macbert-large | 46.17 | 49.35 | 37.65 | 39.38 | 32.36 | 35.33 | | Erlangshen-1.3B | 45.72 | 49.60 | 40.56 | 44.26 | 29.33 | 36.48 | | TCBert-base<sub>110M-Classification-Chinese | 48.61 | 51.99 | 43.31 | 45.15 | 33.45 | 37.28 | | TCBert-large<sub>330M-Classification-Chinese | 50.50 | 52.79 | 52.89 | 53.89 | 34.93 | 38.31 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 50.80 | 51.59 | 51.93 | 54.12 | 33.96 | 38.08 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 45.82 | 47.06 | 42.91 | 43.87 | 33.28 | 34.76 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.10 | 50.90 | 53.78 | 53.33 | 37.62 | 36.94 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.70 | 53.48 | 52.66 | 54.40 | 36.88 | 38.48 | 第二个prompt模板: For ***prompt template 2***: | Dataset | Prompt template 2 | |---------|:------------------------:| | TNEWS | 接下来的新闻,是跟__相关的内容: | | CSLDCP | 接下来的学科,是跟__相关: | | IFLYTEK | 接下来的生活内容,是跟__相关: | 第二个prompt模板的微调结果: The **fine-tuning** results for prompt template 2: | Model | TNEWS | CLSDCP | IFLYTEK | |-----------------|:------:|:------:|:-------:| | Macbert-base | 54.78 | 58.38 | 50.83 | | Macbert-large | 56.77 | 60.22 | 51.63 | | Erlangshen-1.3B | 57.81 | 
62.80 | 52.77 | | TCBert-base<sub>110M-Classification-Chinese | 54.58 | 59.16 | 49.80 | | TCBert-large<sub>330M-Classification-Chinese | 56.22 | 61.23 | 50.77 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 64.82 | 53.34 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 56.87 | 65.83 | 52.94 | 第二个prompt模板的句子相似度结果: The **sentence similarity** results for prompt template 2: | | TNEWS | | CSLDCP | | IFLYTEK | | |-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:| | Model | reference | whitening | reference | whitening | reference | whitening | | Macbert-base | 42.29 | 45.22 | 34.23 | 37.48 | 29.62 | 34.13 | | Macbert-large | 46.22 | 49.60 | 40.11 | 44.26 | 32.36 | 35.16 | | Erlangshen-1.3B | 46.17 | 49.10 | 40.45 | 45.88 | 30.36 | 36.88 | | TCBert-base<sub>110M-Classification-Chinese | 48.31 | 51.34 | 43.42 | 45.27 | 33.10 | 36.19 | | TCBert-large<sub>330M-Classification-Chinese | 51.19 | 51.69 | 52.55 | 53.28 | 34.31 | 37.45 | | TCBert-1.3B<sub>1.3B-Classification-Chinese | 52.14 | 52.39 | 51.71 | 53.89 | 33.62 | 38.14 | | TCBert-base<sub>110M-Sentence-Embedding-Chinese | 46.72 | 48.86 | 43.19 | 43.53 | 34.08 | 35.79 | | TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.65 | 51.94 | 53.84 | 53.67 | 37.74 | 36.65 | | TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.75 | 54.78 | 51.43 | 54.34 | 36.48 | 38.36 | 更多关于TCBERTs的细节,请参考我们的技术报告。基于新的数据,我们会更新TCBERTs,请留意我们仓库的更新。 For more details about TCBERTs, please refer to our paper. We may regularly update TCBERTs as new data comes in, so please keep an eye on the repo! ## 使用 Usage ### 使用示例 Usage Examples ```python # Prompt-based MLM fine-tuning from transformers import BertForMaskedLM, BertTokenizer import torch # Loading models tokenizer=BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese") model=BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese") # Prepare the data inputs = tokenizer("下面是一则关于[MASK][MASK]的新闻:怎样的房子才算户型方正?", return_tensors="pt") labels = tokenizer("下面是一则关于房产的新闻:怎样的房子才算户型方正?", return_tensors="pt")["input_ids"] labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) # Output the loss outputs = model(**inputs, labels=labels) loss = outputs.loss ``` ```python # Prompt-based Sentence Similarity # To extract sentence representations.
from transformers import BertForMaskedLM, BertTokenizer import torch # Loading models tokenizer=BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese") model=BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese") # Cosine similarity function cos = torch.nn.CosineSimilarity(dim=0, eps=1e-8) with torch.no_grad(): # To extract sentence representations for the training data training_input = tokenizer("怎样的房子才算户型方正?", return_tensors="pt") training_output = model(**training_input, output_hidden_states=True) training_representation = torch.mean(training_output.hidden_states[-1].squeeze(), dim=0) # To extract sentence representations for the test data test_input = tokenizer("下面是一则关于[MASK][MASK]的新闻:股票放量下跌,大资金出逃谁在接盘?", return_tensors="pt") test_output = model(**test_input, output_hidden_states=True) test_representation = torch.mean(test_output.hidden_states[-1].squeeze(), dim=0) # Calculate similarity scores similarity_score = cos(training_representation, test_representation) ``` ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[技术报告](https://arxiv.org/abs/2211.11304): If you use our model in your work, please cite the following paper: ``` @article{han2022tcbert, title={TCBERT: A Technical Report for Chinese Topic Classification BERT}, author={Han, Ting and Pan, Kunhao and Chen, Xinyu and Song, Dingjie and Fan, Yuchen and Gao, Xinyu and Gan, Ruyi and Zhang, Jiaxing}, journal={arXiv preprint arXiv:2211.11304}, year={2022} } ``` 如果您在您的工作中使用了我们的模型,可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
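The benchmark tables above report sentence-similarity scores both from the raw representations ("reference") and after a "whitening" post-processing, but the card does not show that step. Below is a minimal sketch of the common BERT-whitening recipe (Su et al., 2021); the exact transform the authors used is not specified, so treat this as an illustrative assumption rather than their implementation.

```python
import torch

def whiten(embeddings: torch.Tensor) -> torch.Tensor:
    # embeddings: (num_sentences, hidden_size) matrix of sentence representations,
    # e.g. stacked outputs of the mean-pooling shown in the snippet above.
    # Needs substantially more sentences than hidden dimensions, or the
    # covariance is singular and 1/sqrt(s) blows up.
    mu = embeddings.mean(dim=0, keepdim=True)
    cov = torch.cov(embeddings.T)            # (hidden, hidden) covariance
    u, s, _ = torch.linalg.svd(cov)
    w = u @ torch.diag(1.0 / torch.sqrt(s))  # whitening transform
    return (embeddings - mu) @ w             # centered, decorrelated embeddings
```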
SRDdev/QABERT-small
SRDdev
2023-06-21T15:00:00Z
70
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "question-answering", "en", "dataset:squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2023-02-08T12:40:31Z
--- datasets: - squad_v2 language: - en metrics: - accuracy library_name: transformers pipeline_tag: question-answering tags: - question-answering --- # QA-BERT QA-BERT is a Question Answering Model. It is a lighter-weight alternative to the larger question-answering models available. ## Dataset The Stanford Question Answering Dataset (SQuAD) is a widely used benchmark dataset for the task of machine reading comprehension. It consists of over 100,000 question-answer pairs based on a set of Wikipedia articles. The goal is to train models that can answer questions based on their understanding of the given text passages. SQuAD has played a significant role in advancing the state-of-the-art in this field and remains a popular choice for researchers and practitioners alike. Due to GPU limitations, this version is trained on `30k samples` from the Stanford Question Answering Dataset. <details> <summary><i>Structure of the Data Dictionary</i></summary> <!--All you need is a blank line--> { "data":[ { "title":"Article Title", "paragraphs":[ { "context":"The context text of the paragraph", "qas":[ { "question":"The question asked about the context", "id":"A unique identifier for the question", "answers":[ { "text":"The answer to the question", "answer_start":"The starting index of the answer in the context" } ] } ] } ] } ], "version":"The version of the SQuAD dataset" } </details> ## Model BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained transformer-based model for natural language processing tasks such as question answering. BERT is fine-tuned for question answering by adding a linear layer on top of the pre-trained BERT representations to predict the start and end of the answer in the input context. BERT has achieved state-of-the-art results on multiple benchmark datasets, including the Stanford Question Answering Dataset (SQuAD). The fine-tuning process allows BERT to effectively capture the relationships between questions and answers and generate accurate answers. <img src="https://imgs.search.brave.com/F8m-nwp6EIG5vq--OmJLrCDpIkuX6tEQ_kyFKQjlUTs/rs:fit:1200:1200:1/g:ce/aHR0cHM6Ly9ibG9n/LmdyaWRkeW5hbWlj/cy5jb20vY29udGVu/dC9pbWFnZXMvMjAy/MC8xMC9TbGljZS0x/OC5wbmc"> For more detail about this, read [Understanding QABERT](https://github.com/SRDdev/AnswerMind) ## Inference _Load model_ ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering QAtokenizer = AutoTokenizer.from_pretrained("SRDdev/QABERT-small") QAmodel = AutoModelForQuestionAnswering.from_pretrained("SRDdev/QABERT-small") ``` _context_ ```text Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a question-answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script. ``` _Build Pipeline_ ```python from transformers import pipeline # `context` is a string holding the passage shown above ask = pipeline("question-answering", model=QAmodel, tokenizer=QAtokenizer) result = ask(question="What is a good example of a question answering dataset?", context=context) print(f"Answer: '{result['answer']}'") ``` ## Contributing Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate. ## Citations ``` @misc{QA-BERT-small, author = {Shreyas Dixit}, year = {2023}, url = {https://huggingface.co/SRDdev/QABERT-small} } ```
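The card above says fine-tuning used `30k samples` of SQuAD but gives no loading code. A minimal sketch with 🤗 Datasets follows; the exact subset the author used is unknown, so the `train[:30000]` slice is an assumption for illustration.

```python
from datasets import load_dataset

# Hypothetical reconstruction of the training subset: the card only states
# "30k samples", so taking the first 30,000 training examples is an assumption.
subset = load_dataset("squad_v2", split="train[:30000]")
print(subset[0]["question"], subset[0]["answers"])
```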
helenpy/distilbert-base-uncased-finetuned-tass
helenpy
2023-06-21T14:44:11Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-21T14:41:27Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-tass results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-tass This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9866 - Accuracy: 0.5170 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9944 | 1.0 | 401 | 1.0210 | 0.4761 | | 0.8994 | 2.0 | 802 | 0.9866 | 0.5170 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
KoRiF/codeparrot-ds
KoRiF
2023-06-21T14:43:58Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-21T13:33:13Z
--- license: mit tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
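The hyperparameters listed above map directly onto 🤗 `TrainingArguments`; a minimal sketch of that mapping follows. The argument names are the standard `transformers` ones, and `output_dir` is a hypothetical placeholder, so treat this as illustrative rather than the author's exact training script.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codeparrot-ds",          # hypothetical output directory
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,       # 32 * 8 = 256 effective batch size
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    seed=42,
)
```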
tux/LunarLanderV2_ppo_from_scratch
tux
2023-06-21T14:36:38Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T14:32:40Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -151.29 +/- 39.57 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
EleutherAI/gpt-j-6b
EleutherAI
2023-06-21T14:33:36Z
256,005
1,477
transformers
[ "transformers", "pytorch", "tf", "jax", "gptj", "text-generation", "causal-lm", "en", "dataset:EleutherAI/pile", "arxiv:2104.09864", "arxiv:2101.00027", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: - en tags: - pytorch - causal-lm license: apache-2.0 datasets: - EleutherAI/pile --- # GPT-J 6B ## Model Description GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters. <figure> | Hyperparameter | Value | |----------------------|------------| | \\(n_{parameters}\\) | 6053381344 | | \\(n_{layers}\\) | 28&ast; | | \\(d_{model}\\) | 4096 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50257/50400&dagger; (same tokenizer as GPT-2/3) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self attention block.</p> <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure> The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. ## Intended Use and Limitations GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt. ### Out-of-scope use GPT-J-6B is **not** intended for deployment without fine-tuning, supervision, and/or moderation. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. GPT-J-6B was trained on an English-language-only dataset, and is thus **not** suitable for translation or generating text in other languages. GPT-J-6B has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose or commercial chatbots. This means GPT-J-6B will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") ``` ## Training data GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai). ## Training procedure This model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. ## Evaluation results <figure> | Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) | |--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------| | Random Chance | &check; | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 | | GPT-3 Ada&ddagger; | &cross; | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- | | GPT-2 1.5B | &check; | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 | | GPT-Neo 1.3B&ddagger; | &check; | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 | | Megatron-2.5B&ast; | &cross; | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 | | GPT-Neo 2.7B&ddagger; | &check; | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 | | GPT-3 1.3B&ast;&ddagger; | &cross; | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 | | GPT-3 Babbage&ddagger; | &cross; | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- | | Megatron-8.3B&ast; | &cross; | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 | | GPT-3 2.7B&ast;&ddagger; | &cross; | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 | | Megatron-11B&dagger; | &check; | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 | | **GPT-J 6B&ddagger;** | **&check;** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** | | GPT-3 6.7B&ast;&ddagger; | &cross; | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 | | GPT-3 Curie&ddagger; | &cross; | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- | | GPT-3 13B&ast;&ddagger; | &cross; | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 | | GPT-3 175B&ast;&ddagger; | &cross; | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 | | GPT-3 Davinci&ddagger; | &cross; | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- | <figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p> <p><strong>&ast;</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these might not be directly comparable.
See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more details.</p> <p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a> <a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>) Thus, evaluation was not attempted.</p> <p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure> ## Citation and Related Information ### BibTeX entry To cite this model: ```bibtex @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Thanks to everyone who has helped out one way or another (listed alphabetically): - [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues. - [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package. - [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table. - [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo. - [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts. - [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
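The "How to use" snippet above only loads the model; a minimal generation sketch is shown below. The prompt is made up, and the arguments are standard `generate` parameters rather than settings recommended by the authors.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# Hypothetical prompt; sampling settings are illustrative, not tuned.
inputs = tokenizer("The Pile is a large-scale dataset that", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```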
VarunD/ppo-LunarLander-v2
VarunD
2023-06-21T14:23:50Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-21T14:23:28Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 270.22 +/- 12.31 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
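The Usage section above is left as a TODO skeleton. A minimal sketch of the standard loading pattern follows; the checkpoint filename is an assumption, so check the repository's file list for the actual name.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is hypothetical; inspect the repo's files for the real .zip artifact.
checkpoint = load_from_hub(repo_id="VarunD/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```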
orkg/orkgnlp-research-fields-classification
orkg
2023-06-21T14:20:50Z
0
0
null
[ "license:mit", "region:us" ]
null
2023-06-07T13:53:34Z
--- license: mit --- This Repository includes the files required to run the `Research Fields Classification` ORKG-NLP service. Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service. This model is converted into a [TorchScript](https://pytorch.org/docs/stable/jit.html) (ScriptModule) using [torch.jit.trace](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html).
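Since the card notes the model ships as a TorchScript `ScriptModule`, a minimal loading sketch follows; the artifact filename is an assumption, so check the repository's files for the actual name.

```python
import torch

# Filename is hypothetical; TorchScript modules are loaded with torch.jit.load.
model = torch.jit.load("research_fields_classifier.pt")
model.eval()  # switch the traced module to inference mode
```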
gsarti/opus-mt-tc-base-en-ja
gsarti
2023-06-21T14:12:24Z
19
0
transformers
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "ja", "multilingual", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-09-09T21:54:27Z
--- language: - en - ja - multilingual license: cc-by-4.0 tags: - translation - opus-mt-tc model-index: - name: opus-mt-tc-base-en-ja results: - task: type: translation name: Translation eng-jpn dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-jpn metrics: - type: bleu value: 15.2 name: BLEU --- # Opus Tatoeba English-Japanese *This model was obtained by running the script [convert_marian_to_pytorch.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/marian/convert_marian_to_pytorch.py). The original models were trained by [Jörg Tiedemann](https://blogs.helsinki.fi/tiedeman/) using the [MarianNMT](https://marian-nmt.github.io/) library. See all available `MarianMTModel` models on the profile of the [Helsinki NLP](https://huggingface.co/Helsinki-NLP) group.* * dataset: opus+bt * model: transformer-align * source language(s): eng * target language(s): jpn * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip) * test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt) * test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.eval.txt) ## Benchmarks | testset | BLEU | chr-F | #sent | #words | BP | |---------|-------|-------|-------|--------|----| | Tatoeba-test.eng-jpn | 15.2 | 0.258 | 10000 | 99206 | 1.000 |
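The card lists benchmarks but no usage snippet. A minimal translation sketch with the standard Marian classes follows; the example sentence is made up, and any English input works the same way.

```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("gsarti/opus-mt-tc-base-en-ja")
model = MarianMTModel.from_pretrained("gsarti/opus-mt-tc-base-en-ja")

# Hypothetical input sentence; tokenize, generate, and decode the translation.
batch = tokenizer(["How are you today?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```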
Voryoji/Shuichi
Voryoji
2023-06-21T14:05:05Z
2
0
fairseq
[ "fairseq", "deberta-v2", "art", "audio-to-audio", "jv", "ja", "zh", "dataset:QingyiSi/Alpaca-CoT", "doi:10.57967/hf/0791", "license:creativeml-openrail-m", "region:us" ]
audio-to-audio
2023-06-21T12:30:26Z
--- license: creativeml-openrail-m datasets: - QingyiSi/Alpaca-CoT language: - jv - ja - zh metrics: - bleurt library_name: fairseq pipeline_tag: audio-to-audio tags: - art ---
surajp/albert-base-sanskrit
surajp
2023-06-21T13:56:27Z
12
4
transformers
[ "transformers", "pytorch", "safetensors", "albert", "feature-extraction", "sa", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: sa --- # ALBERT-base-Sanskrit Explanation Notebook Colab: [SanskritALBERT.ipynb](https://colab.research.google.com/github/parmarsuraj99/suraj-parmar/blob/master/_notebooks/2020-05-02-SanskritALBERT.ipynb) Size of the model is **46MB** Example of usage: ``` from transformers import AutoTokenizer, AutoModel import torch tokenizer = AutoTokenizer.from_pretrained("surajp/albert-base-sanskrit") model = AutoModel.from_pretrained("surajp/albert-base-sanskrit") enc=tokenizer.encode("ॐ सर्वे भवन्तु सुखिनः सर्वे सन्तु निरामयाः । सर्वे भद्राणि पश्यन्तु मा कश्चिद्दुःखभाग्भवेत् । ॐ शान्तिः शान्तिः शान्तिः ॥") print(tokenizer.decode(enc)) ps = model(torch.tensor(enc).unsqueeze(1)) print(ps[0].shape) ``` ``` ''' Output: -------- [CLS] ॐ सर्वे भवन्तु सुखिनः सर्वे सन्तु निरामयाः । सर्वे भद्राणि पश्यन्तु मा कश्चिद्दुःखभाग्भवेत् । ॐ शान्तिः शान्तिः शान्तिः ॥[SEP] torch.Size([28, 1, 768]) ``` > Created by [Suraj Parmar/@parmarsuraj99](https://twitter.com/parmarsuraj99) > Made with <span style="color: #e25555;">&hearts;</span> in India
surajp/RoBERTa-hindi-guj-san
surajp
2023-06-21T13:56:15Z
63
2
transformers
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "fill-mask", "Indic", "hi", "sa", "gu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - hi - sa - gu tags: - Indic license: mit datasets: - Wikipedia (Hindi, Sanskrit, Gujarati) metrics: - perplexity --- # RoBERTa-hindi-guj-san ## Model description Multilingual RoBERTa-like model trained on Wikipedia articles of the Hindi, Sanskrit, and Gujarati languages. The tokenizer was trained on combined text. However, Hindi text was used to pre-train the model, and then it was fine-tuned on Sanskrit and Gujarati text combined, hoping that pre-training with Hindi will help the model learn similar languages. ### Configuration | Parameter | Value | |---|---| | `hidden_size` | 768 | | `num_attention_heads` | 12 | | `num_hidden_layers` | 6 | | `vocab_size` | 30522 | |`model_type`|`roberta`| ## Intended uses & limitations #### How to use ```python # Example usage from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("surajp/RoBERTa-hindi-guj-san") model = AutoModelWithLMHead.from_pretrained("surajp/RoBERTa-hindi-guj-san") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) # Sanskrit: इयं भाषा न केवलं भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते। # Hindi: अगर आप अब अभ्यास नहीं करते हो तो आप अपने परीक्षा में मूर्खतापूर्ण गलतियाँ करोगे। # Gujarati: ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો <mask> હતો. fill_mask("ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો <mask> હતો.") ''' Output: -------- [ {'score': 0.07849744707345963, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો જ હતો.</s>', 'token': 390}, {'score': 0.06273336708545685, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો ન હતો.</s>', 'token': 478}, {'score': 0.05160355195403099, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો થઇ હતો.</s>', 'token': 2075}, {'score': 0.04751499369740486, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો એક હતો.</s>', 'token': 600}, {'score': 0.03788900747895241, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો પણ હતો.</s>', 'token': 840} ] ``` ## Training data Cleaned Wikipedia articles in Hindi, Sanskrit and Gujarati on Kaggle. It contains training as well as evaluation text. Used in [iNLTK](https://github.com/goru001/inltk) - [Hindi](https://www.kaggle.com/disisbig/hindi-wikipedia-articles-172k) - [Gujarati](https://www.kaggle.com/disisbig/gujarati-wikipedia-articles) - [Sanskrit](https://www.kaggle.com/disisbig/sanskrit-wikipedia-articles) ## Training procedure - On TPU (using `xla_spawn.py`) - For language modelling - Iteratively increasing `--block_size` from 128 to 256 over epochs - Tokenizer trained on combined text - Pre-training with Hindi and fine-tuning on Sanskrit and Gujarati texts ``` --model_type distillroberta-base \ --model_name_or_path "/content/SanHiGujBERTa" \ --mlm_probability 0.20 \ --line_by_line \ --save_total_limit 2 \ --per_device_train_batch_size 128 \ --per_device_eval_batch_size 128 \ --num_train_epochs 5 \ --block_size 256 \ --seed 108 \ --overwrite_output_dir \ ``` ## Eval results perplexity = 2.920005983224673 > Created by [Suraj Parmar/@parmarsuraj99](https://twitter.com/parmarsuraj99) | [LinkedIn](https://www.linkedin.com/in/parmarsuraj99/) > Made with <span style="color: #e25555;">&hearts;</span> in India
arcane-impact/gpt_bigcode-santacoder-ggml
arcane-impact
2023-06-21T13:50:23Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-06-21T13:40:47Z
--- license: openrail --- GGML format of [bigcode/gpt_bigcode-santacoder](https://huggingface.co/bigcode/gpt_bigcode-santacoder)
reinforceYrWay/ppo-Huggy
reinforceYrWay
2023-06-21T13:49:25Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-06-21T13:49:21Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: reinforceYrWay/ppo-Huggy 3. Select your *.nn/*.onnx file 4. Click on Watch the agent play 👀