Columns in this dataset sample (dtype and observed min/max):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-01 06:28:43 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (546 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-01 06:27:36 |
| card | string (length) | 11 | 1.01M |
TalesLF/ppo-Huggy
TalesLF
2023-06-29T00:42:23Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-06-29T00:42:18Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: TalesLF/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
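A minimal sketch to accompany the card above (not part of it): before resuming training or inspecting the policy locally, the trained files can be pulled from the Hub with `huggingface_hub`; the local directory name is an arbitrary choice.

```python
# Download the trained ppo-Huggy files (config, checkpoints, .onnx policy)
# so that mlagents-learn --resume can pick them up locally.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="TalesLF/ppo-Huggy", local_dir="./downloads/ppo-Huggy")
print(f"Model files downloaded to {local_dir}")
```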
ausboss/llama-30b-supercot-4bit
ausboss
2023-06-29T00:22:48Z
8
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-24T15:14:35Z
Merge of [huggyllama/llama-30b](https://huggingface.co/huggyllama/llama-30b) + [kaiokendev/SuperCOT-LoRA](https://huggingface.co/kaiokendev/SuperCOT-LoRA/edit/main/README.md) SuperCOT was trained to work with LangChain prompting. Load it locally in my custom LLM notebook, which uses the Oobabooga modules to load models: https://github.com/ausboss/Local-LLM-Langchain Then you can add cells from these other notebooks for testing: https://github.com/gkamradt/langchain-tutorials # From kaiokendev's LoRA page ### Compatibility This LoRA is compatible with any 7B, 13B or 30B 4-bit quantized LLaMA model, including ggml quantized converted bins ### Prompting You should prompt the LoRA the same way you would prompt Alpaca or Alpacino: ``` Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: <instruction> ### Input: <any additional context. Remove this if it's not necessary> ### Response: <make sure to leave a single new-line here for optimal results> ``` Remember that with lower parameter sizes, the structure of the prompt becomes more important. The same prompt worded differently can give wildly different answers. Consider using the following suggestion suffixes to improve output quality: - "Think through this step by step" - "Let's think about this logically" - "Explain your reasoning" - "Provide details to support your answer" - "Compare and contrast your answer with alternatives" ### Coming Soon - Tweet fix for 13B and 7B - lower model sizes seem to be extremely sensitive to hashtags at the end of training data responses, especially at longer cutoffs
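Since the template above must be reproduced exactly (including the single trailing newline after `### Response:`), a small hypothetical helper makes it less error-prone; the function name and example strings below are illustrative, not part of the card.

```python
# Hypothetical helper that assembles the Alpaca-style prompt this LoRA expects.
def build_supercot_prompt(instruction: str, context: str = "") -> str:
    header = ("Below is an instruction that describes a task, paired with an input "
              "that provides further context. Write a response that appropriately "
              "completes the request.")
    parts = [header, f"### Instruction:\n{instruction}"]
    if context:  # the card says to drop the Input block when it is not needed
        parts.append(f"### Input:\n{context}")
    parts.append("### Response:\n")  # single trailing newline, per the card's advice
    return "\n\n".join(parts)

print(build_supercot_prompt("Summarize the text.", "LangChain chains prompts together."))
```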
t3PbMvBN6SXv/Reinforce-CartPole-v1
t3PbMvBN6SXv
2023-06-29T00:18:45Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T03:20:59Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 430.30 +/- 74.79 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
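For readers unfamiliar with the metric format, the reported `430.30 +/- 74.79` is the mean and standard deviation of total reward over evaluation episodes; an illustrative computation (the reward values below are placeholders, not the episodes behind this card's number):

```python
# Illustrative only: how a "mean_reward +/- std" figure is typically computed.
import numpy as np

episode_rewards = np.array([500.0, 431.0, 352.5, 480.0, 390.0])  # placeholder values
print(f"mean_reward={episode_rewards.mean():.2f} +/- {episode_rewards.std():.2f}")
```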
TheBloke/GPlatty-30B-GGML
TheBloke
2023-06-29T00:01:02Z
0
4
null
[ "arxiv:2302.13971", "license:other", "region:us" ]
null
2023-06-28T22:48:27Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Lilloukas' GPlatty 30B GGML These files are GGML format model files for [Lilloukas' GPlatty 30B](https://huggingface.co/lilloukas/GPlatty-30B). GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [KoboldCpp](https://github.com/LostRuins/koboldcpp) * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) * [ctransformers](https://github.com/marella/ctransformers) ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GPlatty-30B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/GPlatty-30B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lilloukas/GPlatty-30B) <!-- compatibility_ggml start --> ## Compatibility ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` These 'original' quant method files were made with an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`. They are guaranteed to be compatible with any UIs, tools and libraries released since late May. ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`. They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. 
The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | gplatty-30b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | gplatty-30b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | gplatty-30b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | gplatty-30b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | gplatty-30b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. | | gplatty-30b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. | | gplatty-30b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | gplatty-30b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | gplatty-30b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | gplatty-30b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | gplatty-30b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | gplatty-30b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | gplatty-30b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | gplatty-30b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 10 -ngl 32 -m gplatty-30b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:" ``` If you're able to use full GPU offloading, you should use `-t 1` to get best performance. If you are not able to fully offload to GPU, use more cores: change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire. Thank you to all my generous patrons and donors! <!-- footer end --> # Original model card: Lilloukas' GPlatty 30B # Information GPlatty-30B is a merge of [lilloukas/Platypus-30B](https://huggingface.co/lilloukas/Platypus-30B) and [chansung/gpt4-alpaca-lora-30b](https://huggingface.co/chansung/gpt4-alpaca-lora-30b) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 63.6 | | ARC (25-shot) | 66.0 | | HellaSwag (10-shot) | 84.8 | | TruthfulQA (0-shot) | 53.8 | | Avg. | 67.0 | We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above. ## Model Details * **Trained by**: Cole Hunter & Ariel Lee * **Model type:** **GPlatty-30B** is an auto-regressive language model based on the LLaMA transformer architecture. * **Language(s)**: English * **License for base weights**: the base LLaMA model's weights are covered by Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md). | Hyperparameter | Value | |---------------------------|-------| | \\(n_\text{parameters}\\) | 33B | | \\(d_\text{model}\\) | 6656 | | \\(n_\text{layers}\\) | 60 | | \\(n_\text{heads}\\) | 52 | ## Reproducing Evaluation Results Install LM Evaluation Harness: ``` git clone https://github.com/EleutherAI/lm-evaluation-harness cd lm-evaluation-harness pip install -e . 
``` Each task was evaluated on a single A100 80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda ``` ## Limitations and bias The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly. ## Citations ```bibtex @article{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } @article{hu2021lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu}, journal={CoRR}, year={2021} } ```
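The card above runs the GGML files through the llama.cpp CLI; an equivalent hedged sketch with llama-cpp-python (listed in the card as a compatible library, assuming a version that still loads GGML files) mirrors the same prompt and sampling settings:

```python
# Load one of the GGML files from the Provided Files table and run the
# card's example prompt; the local path is an assumption.
from llama_cpp import Llama

llm = Llama(model_path="./gplatty-30b.ggmlv3.q5_0.bin", n_ctx=2048, n_gpu_layers=32)
out = llm("### Instruction: Write a story about llamas\n### Response:",
          max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```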
anas21/English4SpeechToTextModel
anas21
2023-06-28T23:52:38Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-25T18:02:44Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: English4SpeechToTextModel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # English4SpeechToTextModel This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.1 - train_batch_size: 15 - eval_batch_size: 8 - seed: 8 - gradient_accumulation_steps: 15 - total_train_batch_size: 225 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
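The card above omits usage; a minimal hedged sketch with the standard `transformers` ASR pipeline (the audio file path is a placeholder):

```python
# Transcribe a local audio file with the fine-tuned wav2vec2 checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anas21/English4SpeechToTextModel")
print(asr("sample.wav")["text"])
```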
Gemmar/wav2vec2LugandaASR20
Gemmar
2023-06-28T23:50:32Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_13_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-28T11:20:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_13_0 metrics: - wer model-index: - name: wav2vec2LugandaASR20 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_13_0 type: common_voice_13_0 config: lg split: validation args: lg metrics: - name: Wer type: wer value: 0.23221005634102265 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2LugandaASR20 This model is a fine-tuned version of [Gemmar/wav2vec2LugandaASR](https://huggingface.co/Gemmar/wav2vec2LugandaASR) on the common_voice_13_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2393 - Wer: 0.2322 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1093 | 0.18 | 100 | 0.2134 | 0.2480 | | 0.1141 | 0.36 | 200 | 0.2329 | 0.2724 | | 0.1224 | 0.54 | 300 | 0.2560 | 0.2864 | | 0.1345 | 0.72 | 400 | 0.2348 | 0.2716 | | 0.1271 | 0.9 | 500 | 0.2339 | 0.2702 | | 0.1232 | 1.08 | 600 | 0.2457 | 0.2806 | | 0.1149 | 1.27 | 700 | 0.2372 | 0.2695 | | 0.1129 | 1.45 | 800 | 0.2328 | 0.2718 | | 0.1196 | 1.63 | 900 | 0.2326 | 0.2615 | | 0.1185 | 1.81 | 1000 | 0.2249 | 0.2672 | | 0.1159 | 1.99 | 1100 | 0.2202 | 0.2559 | | 0.0933 | 2.17 | 1200 | 0.2302 | 0.2559 | | 0.0947 | 2.35 | 1300 | 0.2306 | 0.2530 | | 0.0941 | 2.53 | 1400 | 0.2325 | 0.2509 | | 0.0946 | 2.71 | 1500 | 0.2233 | 0.2495 | | 0.0949 | 2.89 | 1600 | 0.2320 | 0.2443 | | 0.0883 | 3.07 | 1700 | 0.2383 | 0.2463 | | 0.0783 | 3.25 | 1800 | 0.2386 | 0.2437 | | 0.0753 | 3.43 | 1900 | 0.2329 | 0.2426 | | 0.0772 | 3.62 | 2000 | 0.2317 | 0.2392 | | 0.0774 | 3.8 | 2100 | 0.2308 | 0.2353 | | 0.0764 | 3.98 | 2200 | 0.2293 | 0.2357 | | 0.0666 | 4.16 | 2300 | 0.2446 | 0.2388 | | 0.065 | 4.34 | 2400 | 0.2456 | 0.2359 | | 0.0643 | 4.52 | 2500 | 0.2446 | 0.2345 | | 0.0652 | 4.7 | 2600 | 0.2430 | 0.2325 | | 0.0669 | 4.88 | 2700 | 0.2393 | 0.2322 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
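The Wer column in the card above is the word error rate; an illustrative computation with the `evaluate` library (the strings are placeholders, not data from this model):

```python
# WER = (substitutions + insertions + deletions) / number of reference words.
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(predictions=["ekyalo kino kirungi"],
                           references=["ekyalo kino kirungi nnyo"])
print(f"WER: {score:.4f}")  # the card's final WER of 0.2322 means ~23% of words differ
```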
Ibrahim-Alam/finetuning-xlnet-base-cased-on-imdb
Ibrahim-Alam
2023-06-28T23:49:30Z
93
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "dataset:imdb", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-26T18:21:35Z
--- license: mit tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-xlnet-base-cased-on-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.95056 - name: F1 type: f1 value: 0.9503813729425933 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-xlnet-base-cased-on-imdb This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1402 - Accuracy: 0.9506 - F1: 0.9504 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
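A minimal usage sketch (not part of the card above), assuming the standard text-classification pipeline applies to this checkpoint:

```python
# Classify IMDB-style movie-review sentiment with the fine-tuned XLNet model.
from transformers import pipeline

clf = pipeline("text-classification", model="Ibrahim-Alam/finetuning-xlnet-base-cased-on-imdb")
print(clf("This movie was an absolute delight from start to finish."))
```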
allenchienxxx/q-FrozenLake-v1-4x4-noSlippery
allenchienxxx
2023-06-28T23:48:04Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T23:48:01Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="allenchienxxx/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
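The snippet in the card above assumes a course-provided `load_from_hub` helper; a plausible self-contained version (an assumption, not the course's exact code):

```python
# Hypothetical reimplementation of the course's load_from_hub helper.
import pickle
import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="allenchienxxx/q-FrozenLake-v1-4x4-noSlippery",
                      filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # matching the no_slippery variant
```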
Mozzipa/qlora-koalpaca-polyglot-12.8b-50step
Mozzipa
2023-06-28T23:43:40Z
2
0
peft
[ "peft", "region:us" ]
null
2023-06-28T23:43:37Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
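The card above lists only the bitsandbytes config; a hedged sketch of loading the adapter on its base model with PEFT (the base model ID is read from the adapter config; everything else is an assumption):

```python
# Recreate the 4-bit NF4 setup from the card and attach the LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

repo = "Mozzipa/qlora-koalpaca-polyglot-12.8b-50step"
peft_config = PeftConfig.from_pretrained(repo)
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_quant_type="nf4",
                                bnb_4bit_use_double_quant=True,
                                bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path,
                                            quantization_config=bnb_config,
                                            device_map="auto")
model = PeftModel.from_pretrained(base, repo)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
```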
Alyss97/bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos
Alyss97
2023-06-28T23:17:00Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T16:34:21Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9405 - F1: 0.5939 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.9441 | 1.0 | 766 | 0.9419 | 0.5604 | | 0.7769 | 2.0 | 1532 | 0.9405 | 0.5939 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
NeoCodes-dev/Pyramid_PPO1
NeoCodes-dev
2023-06-28T23:08:21Z
14
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-06-28T23:08:19Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: dergky1/Pyramid_PPO1 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Abubakari/finetuned-Sentiment-classfication-ROBERTA-model
Abubakari
2023-06-28T22:50:49Z
122
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-04-27T09:18:54Z
--- license: mit tags: - generated_from_trainer model-index: - name: finetuned-Sentiment-classfication-ROBERTA-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-Sentiment-classfication-ROBERTA-model This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5618 - Rmse: 0.6118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7273 | 2.0 | 500 | 0.5618 | 0.6118 | | 0.4294 | 4.0 | 1000 | 0.5821 | 0.5906 | | 0.2278 | 6.0 | 1500 | 0.8019 | 0.6235 | | 0.1246 | 8.0 | 2000 | 0.9412 | 0.5961 | | 0.083 | 10.0 | 2500 | 1.1040 | 0.5978 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
jncraton/flan-t5-xl-ct2-int8
jncraton
2023-06-28T22:26:50Z
47
1
transformers
[ "transformers", "text2text-generation", "en", "fr", "ro", "de", "multilingual", "dataset:svakulenk0/qrecc", "dataset:taskmaster2", "dataset:djaym7/wiki_dialog", "dataset:deepmind/code_contests", "dataset:lambada", "dataset:gsm8k", "dataset:aqua_rat", "dataset:esnli", "dataset:quasc", "dataset:qed", "arxiv:2210.11416", "arxiv:1910.09700", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-28T20:45:42Z
--- language: - en - fr - ro - de - multilingual widget: - text: "Translate to German: My name is Arthur" example_title: "Translation" - text: "Please answer to the following question. Who is going to be the next Ballon d'or?" example_title: "Question Answering" - text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering." example_title: "Logical reasoning" - text: "Please answer the following question. What is the boiling point of Nitrogen?" example_title: "Scientific knowledge" - text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?" example_title: "Yes/no question" - text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?" example_title: "Reasoning task" - text: "Q: ( False or not False or False ) is? A: Let's think step by step" example_title: "Boolean Expressions" - text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?" example_title: "Math reasoning" - text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?" example_title: "Premise and hypothesis" tags: - text2text-generation datasets: - svakulenk0/qrecc - taskmaster2 - djaym7/wiki_dialog - deepmind/code_contests - lambada - gsm8k - aqua_rat - esnli - quasc - qed license: apache-2.0 --- # Model Card for FLAN-T5 XL ![model image](https://s3.amazonaws.com/moonup/production/uploads/1666363435475-62441d1d9fdefb55a0b7d12c.png) # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Uses](#uses) 4. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 5. [Training Details](#training-details) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Citation](#citation) # TL;DR If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, also covering more languages. As mentioned in the first few lines of the abstract: > Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models. **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large). 
# Model Details ## Model Description - **Model type:** Language model - **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian - **License:** Apache 2.0 - **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5) - **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) - **Resources for more information:** - [Research paper](https://arxiv.org/pdf/2210.11416.pdf) - [GitHub Repo](https://github.com/google-research/t5x) - [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5) # Usage Find below some example scripts on how to use the model in `transformers`: ## Using the Pytorch model ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl") input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto") input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU using different precisions #### FP16 <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16) input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> #### INT8 <details> <summary> Click to expand </summary> ```python # pip install bitsandbytes accelerate from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", load_in_8bit=True) input_text = "translate English to German: How old are you?" 
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> # Uses ## Direct Use and Downstream Use The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that: > The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details. ## Out-of-Scope Use More information needed. # Bias, Risks, and Limitations The information below in this section are copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. ## Ethical considerations and risks > Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. ## Known Limitations > Flan-T5 has not been tested in real world applications. ## Sensitive Use: > Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech. # Training Details ## Training Data The model was trained on a mixture of tasks, that includes the tasks described in the table below (from the original paper, figure 2): ![table.png](https://s3.amazonaws.com/moonup/production/uploads/1666363265279-62441d1d9fdefb55a0b7d12c.png) ## Training Procedure According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf): > These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size. The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax). # Evaluation ## Testing Data, Factors & Metrics The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation: ![image.png](https://s3.amazonaws.com/moonup/production/uploads/1668072995230-62441d1d9fdefb55a0b7d12c.png) For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf). ## Results For full results for FLAN-T5-XL, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4. 
- **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @misc{https://doi.org/10.48550/arxiv.2210.11416, doi = {10.48550/ARXIV.2210.11416}, url = {https://arxiv.org/abs/2210.11416}, author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Scaling Instruction-Finetuned Language Models}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
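The repo name (jncraton/flan-t5-xl-ct2-int8) suggests a CTranslate2 int8 conversion, which the copied card above never shows how to load; a hedged sketch using the documented CTranslate2 pattern for T5 models (the local directory name is an assumption):

```python
# Run the int8 CTranslate2 conversion of FLAN-T5 XL; the original
# google/flan-t5-xl tokenizer handles text <-> token conversion.
import ctranslate2
import transformers

translator = ctranslate2.Translator("flan-t5-xl-ct2-int8", device="cpu")
tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-xl")

text = "translate English to German: How old are you?"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(text))
results = translator.translate_batch([tokens])
output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```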
rvrtdta/roberta-base-bne-finetuned-MeIA-AnalisisDeSentimientos
rvrtdta
2023-06-28T22:22:16Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-26T18:21:10Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: roberta-base-bne-finetuned-MeIA-AnalisisDeSentimientos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-MeIA-AnalisisDeSentimientos This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9624 - F1: 0.5881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.9506 | 1.0 | 657 | 0.9264 | 0.5792 | | 0.6835 | 2.0 | 1314 | 0.9624 | 0.5881 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Sadami/roberta-base-bne-finetuned-MeIA-AnalisisDeSentimientos
Sadami
2023-06-28T22:10:06Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-27T04:01:47Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: roberta-base-bne-finetuned-MeIA-AnalisisDeSentimientos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-MeIA-AnalisisDeSentimientos This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9503 - F1: 0.5905 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 18 - eval_batch_size: 18 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.9465 | 1.0 | 584 | 0.9365 | 0.5756 | | 0.704 | 2.0 | 1168 | 0.9503 | 0.5905 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Mariamtc/finetuned-twitter-roberta-base-sep2022-tweetcognition
Mariamtc
2023-06-28T22:07:15Z
103
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-02T17:05:52Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: finetuned-twitter-roberta-base-sep2022-tweetcognition results: [] language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-twitter-roberta-base-sep2022-tweetcognition This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sep2022](https://huggingface.co/cardiffnlp/twitter-roberta-base-sep2022) on a custom dataset of 2527 recent tweets related to major life events that occur during the lifespan of the users. It achieves the following results on the evaluation set: - Loss: 0.2433 - Accuracy: 0.9545 ## Model description A RoBERTa-base model trained on 168.86M tweets up to the end of September 2022 (a 15M-tweet increment), then fine-tuned on a custom dataset of 2527 recent tweets related to major life events, for a specific text classification task: classify posts from the Twitter social media platform into a set of 30 distinct classes, each representing a major life event that the author of the post recently experienced. RoBERTa (Robustly Optimized BERT approach) is a state-of-the-art natural language processing (NLP) model developed by Facebook AI. ## Intended uses & limitations This fine-tuned language model is intended for a specific text classification task: classify posts from the Twitter social media platform into a set of 30 distinct classes, each representing a major life event that the author of the post recently experienced. The model can be further improved by training on an even larger training dataset with an extended and more diverse set of life-event classes. ## Training procedure A fine-tuning process was applied to the original model [cardiffnlp/twitter-roberta-base-sep2022](https://huggingface.co/cardiffnlp/twitter-roberta-base-sep2022) by: - training the original model on a custom dataset of 2527 recent tweets related to major life events that occur during the lifespan of the users - setting the model's hyperparameters to the values in the table below ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0283 | 1.0 | 127 | 1.4553 | 0.8162 | | 0.9216 | 2.0 | 254 | 0.5951 | 0.8992 | | 0.4343 | 3.0 | 381 | 0.3544 | 0.9348 | | 0.2629 | 4.0 | 508 | 0.2613 | 0.9486 | | 0.1861 | 5.0 | 635 | 0.2433 | 0.9545 | ### Framework versions - Transformers 4.29.0 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
WALIDALI/bekimajic
WALIDALI
2023-06-28T21:46:07Z
32
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-28T21:34:07Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### bekimajic Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
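The Dreambooth card above gives no inference snippet; a hedged sketch with `diffusers` (the prompt token "bekimajic" mirrors the concept name, and the exact instance prompt is an assumption):

```python
# Generate an image from the Dreambooth checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("WALIDALI/bekimajic",
                                               torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of bekimajic person, studio lighting").images[0]
image.save("bekimajic.png")
```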
Kokuhou/pbl
Kokuhou
2023-06-28T21:45:05Z
235
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-28T21:44:57Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pbl results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9700000286102295 --- # pbl Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Cow ![Cow](images/Cow.jpg) #### Elephant ![Elephant](images/Elephant.jpg) #### Gorilla ![Gorilla](images/Gorilla.jpg) #### Hippo ![Hippo](images/Hippo.jpg) #### Lizard ![Lizard](images/Lizard.jpg) #### Monkey ![Monkey](images/Monkey.jpg) #### Panda ![Panda](images/Panda.jpg) #### Tiger ![Tiger](images/Tiger.jpg) #### Zebra ![Zebra](images/Zebra.jpg)
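A usage sketch for the HuggingPics classifier above (an assumption following the standard pipeline pattern; the image path is taken from the example images listed in the card):

```python
# Classify one of the card's example images into the animal classes.
from transformers import pipeline

classifier = pipeline("image-classification", model="Kokuhou/pbl")
print(classifier("images/Zebra.jpg"))
```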
LanguageMachines/blip2-flan-t5-xxl
LanguageMachines
2023-06-28T21:39:54Z
9
1
transformers
[ "transformers", "pytorch", "blip-2", "visual-question-answering", "vision", "image-to-text", "image-captioning", "en", "arxiv:2301.12597", "arxiv:2210.11416", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2023-06-28T05:45:06Z
--- language: en license: mit tags: - vision - image-to-text - image-captioning - visual-question-answering pipeline_tag: image-to-text inference: false duplicated_from: Salesforce/blip2-flan-t5-xxl --- # BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, given the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real-world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context it is being deployed within. 
### How to use For code examples, see the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or refer to the snippets below depending on your use case: #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
Panchovix/robin-33B-v2-SuperHOT-8k-4bit-32g
Panchovix
2023-06-28T21:39:36Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-28T21:18:02Z
--- license: other --- [TheBloke robin-33B-v2-fp16](https://huggingface.co/TheBloke/robin-33B-v2-fp16/tree/main) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit. It was created with GPTQ-for-LLaMA using group size 32 and act-order true, to keep perplexity as close as possible to the FP16 model. I HIGHLY suggest using exllama to avoid VRAM issues. Set max_seq_len to the desired context length: if max_seq_len = 4096, use compress_pos_emb = 2; if max_seq_len = 8192, use compress_pos_emb = 4. If you have two 24 GB GPUs, use gpu_split: 9,21 to avoid out-of-memory errors at 8192 context. A minimal loading sketch follows below.
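A minimal loading sketch for the settings above, assuming the upstream exllama repo (https://github.com/turboderp/exllama); the import path, file paths, and method names are assumptions and may differ between versions.

```python
# Hedged sketch: class/attribute names follow the turboderp/exllama repo
# as of mid-2023; the model paths below are hypothetical.
from model import ExLlama, ExLlamaConfig  # exllama's model.py

config = ExLlamaConfig("models/robin-33B-v2-SuperHOT-8k-4bit-32g/config.json")
config.model_path = "models/robin-33B-v2-SuperHOT-8k-4bit-32g/model.safetensors"
config.max_seq_len = 8192      # desired context length
config.compress_pos_emb = 4    # max_seq_len / 2048, per the settings above
config.set_auto_map("9,21")    # gpu_split for 2x24 GB cards (assumed API)
model = ExLlama(config)
```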
gbellamy/poca-SoccerTwos
gbellamy
2023-06-28T21:38:59Z
6
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-06-28T21:38:47Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: gbellamy/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
prognosis/falcon7b-cardio-disease-qa-v1
prognosis
2023-06-28T21:15:59Z
0
0
null
[ "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-06-28T09:16:23Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: falcon7b-cardio-disease-qa-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon7b-cardio-disease-qa-v1 This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 1500 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
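For reference, the hyperparameters above map roughly onto `transformers.TrainingArguments`; a minimal sketch (the `output_dir` is hypothetical, the Adam betas/epsilon listed above are the library defaults, and the model/dataset wiring is omitted):

```python
# Sketch only: reproduces the listed hyperparameters, not the full training run.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="falcon7b-cardio-disease-qa-v1",  # hypothetical
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 4 x 4 = total train batch size 16
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=1500,
)
```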
ROBERTO1900/logos
ROBERTO1900
2023-06-28T21:09:20Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-28T21:09:20Z
--- license: creativeml-openrail-m ---
cleanrl/Pusher-v4-ddpg_continuous_action-seed1
cleanrl
2023-06-28T21:08:17Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Pusher-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T21:08:02Z
--- tags: - Pusher-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pusher-v4 type: Pusher-v4 metrics: - type: mean_reward value: -30.52 +/- 2.85 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **Pusher-v4** This is a trained model of a DDPG agent playing Pusher-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Pusher-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Pusher-v4-ddpg_continuous_action-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Pusher-v4 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'env_id': 'Pusher-v4', 'exp_name': 'ddpg_continuous_action', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'torch_deterministic': True, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Panchovix/robin-33B-v2-fp16-SuperHOT-8k
Panchovix
2023-06-28T21:04:37Z
10
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-28T19:59:12Z
--- license: other --- [TheBloke robin-33B-v2-fp16](https://huggingface.co/TheBloke/robin-33B-v2-fp16/tree/main) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), without quantization (full FP16 model).
NeoCodes-dev/ppo-SnowballTarget
NeoCodes-dev
2023-06-28T21:03:00Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-06-28T21:02:56Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: NeoCodes-dev/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
NasimB/bert-dp-4
NasimB
2023-06-28T21:01:05Z
5
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "dataset:generator", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-26T01:24:27Z
--- tags: - generated_from_trainer datasets: - generator model-index: - name: bert-dp-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-dp-4 This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 2.4611 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 180 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 6.3492 | 1.89 | 1000 | 5.9327 | | 5.8333 | 3.78 | 2000 | 5.8515 | | 5.7604 | 5.67 | 3000 | 5.8483 | | 5.7137 | 7.56 | 4000 | 5.7914 | | 5.6597 | 9.45 | 5000 | 5.7672 | | 5.6213 | 11.34 | 6000 | 5.7594 | | 5.5798 | 13.23 | 7000 | 5.7352 | | 5.5482 | 15.12 | 8000 | 5.7275 | | 5.513 | 17.01 | 9000 | 5.7203 | | 5.485 | 18.9 | 10000 | 5.7211 | | 5.4498 | 20.79 | 11000 | 5.6947 | | 5.4175 | 22.68 | 12000 | 5.6923 | | 5.3877 | 24.57 | 13000 | 5.6879 | | 5.3635 | 26.47 | 14000 | 5.6776 | | 5.3389 | 28.36 | 15000 | 5.6757 | | 5.3166 | 30.25 | 16000 | 5.6758 | | 5.2951 | 32.14 | 17000 | 5.6676 | | 5.2793 | 34.03 | 18000 | 5.6711 | | 5.2684 | 35.92 | 19000 | 5.6687 | | 5.2609 | 37.81 | 20000 | 5.6684 | | 5.2606 | 39.7 | 21000 | 5.6719 | | 5.2624 | 41.59 | 22000 | 5.6697 | | 5.2551 | 43.48 | 23000 | 5.6718 | | 5.2461 | 45.37 | 24000 | 5.6699 | | 5.2431 | 47.26 | 25000 | 5.6692 | | 5.2414 | 49.15 | 26000 | 5.6691 | | 5.2856 | 51.04 | 27000 | 5.6823 | | 5.2753 | 52.93 | 28000 | 5.6860 | | 5.2549 | 54.82 | 29000 | 5.6877 | | 5.2276 | 56.71 | 30000 | 5.6285 | | 5.1674 | 58.6 | 31000 | 5.5439 | | 5.0894 | 60.49 | 32000 | 5.4082 | | 4.9508 | 62.38 | 33000 | 5.1598 | | 4.7453 | 64.27 | 34000 | 4.9274 | | 4.5898 | 66.16 | 35000 | 4.7884 | | 4.4656 | 68.05 | 36000 | 4.6531 | | 4.35 | 69.94 | 37000 | 4.5123 | | 4.2378 | 71.83 | 38000 | 4.4012 | | 4.1496 | 73.72 | 39000 | 4.3240 | | 4.0891 | 75.61 | 40000 | 4.2763 | | 4.0538 | 77.5 | 41000 | 4.2520 | | 4.0448 | 79.4 | 42000 | 4.2485 | | 3.9724 | 81.29 | 43000 | 3.9940 | | 3.6527 | 83.18 | 44000 | 3.7442 | | 3.4172 | 85.07 | 45000 | 3.5713 | | 3.2446 | 86.96 | 46000 | 3.4403 | | 3.4764 | 88.85 | 47000 | 3.3796 | | 3.0543 | 90.74 | 48000 | 3.2884 | | 2.9549 | 92.63 | 49000 | 3.2107 | | 2.8785 | 94.52 | 50000 | 3.1466 | | 2.8143 | 96.41 | 51000 | 3.0788 | | 2.7605 | 98.3 | 52000 | 3.0230 | | 2.7111 | 100.19 | 53000 | 2.9802 | | 2.6727 | 102.08 | 54000 | 2.9414 | | 2.6417 | 103.97 | 55000 | 2.9167 | | 2.612 | 105.86 | 56000 | 2.8927 | | 2.5918 | 107.75 | 57000 | 2.8769 | | 2.5769 | 109.64 | 58000 | 2.8637 | | 2.566 | 111.53 | 59000 | 2.8551 | | 2.556 | 113.42 | 60000 | 2.8458 | | 2.548 | 115.31 | 61000 | 2.8488 | | 2.5468 | 117.2 | 62000 | 2.8412 | | 2.5453 | 119.09 | 63000 | 2.8383 | | 2.7567 | 120.98 | 64000 | 2.8857 | | 2.6017 | 122.87 | 65000 | 2.8382 | | 2.5416 | 124.76 | 66000 | 2.7862 | | 2.484 | 126.65 | 67000 | 2.7415 | | 2.4361 | 128.54 | 68000 | 2.7079 | | 
2.3925 | 130.43 | 69000 | 2.6771 | | 2.3512 | 132.33 | 70000 | 2.6542 | | 2.3146 | 134.22 | 71000 | 2.6327 | | 2.2805 | 136.11 | 72000 | 2.6119 | | 2.2494 | 138.0 | 73000 | 2.5903 | | 2.2218 | 139.89 | 74000 | 2.5734 | | 2.1955 | 141.78 | 75000 | 2.5584 | | 2.1739 | 143.67 | 76000 | 2.5459 | | 2.154 | 145.56 | 77000 | 2.5337 | | 2.1324 | 147.45 | 78000 | 2.5260 | | 2.1149 | 149.34 | 79000 | 2.5169 | | 2.096 | 151.23 | 80000 | 2.5095 | | 2.083 | 153.12 | 81000 | 2.5045 | | 2.0666 | 155.01 | 82000 | 2.4911 | | 2.0562 | 156.9 | 83000 | 2.4907 | | 2.0437 | 158.79 | 84000 | 2.4808 | | 2.0356 | 160.68 | 85000 | 2.4816 | | 2.0317 | 162.57 | 86000 | 2.4758 | | 2.0201 | 164.46 | 87000 | 2.4724 | | 2.0138 | 166.35 | 88000 | 2.4723 | | 2.0095 | 168.24 | 89000 | 2.4651 | | 2.0056 | 170.13 | 90000 | 2.4651 | | 2.0021 | 172.02 | 91000 | 2.4616 | | 1.9974 | 173.91 | 92000 | 2.4611 | | 1.9985 | 175.8 | 93000 | 2.4613 | | 1.9954 | 177.69 | 94000 | 2.4579 | | 1.9979 | 179.58 | 95000 | 2.4611 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
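Since the card leaves usage unspecified, here is a minimal fill-mask inference sketch (the example sentence is illustrative only, and it assumes the checkpoint ships its own tokenizer):

```python
# Hedged usage sketch for this fill-mask checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="NasimB/bert-dp-4")
print(unmasker("The children went to the [MASK]."))
```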
mxdza/ppo-LunarLander-v2
mxdza
2023-06-28T20:58:03Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T20:57:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 240.30 +/- 20.26 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
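A hedged way to complete the stub above; the checkpoint filename follows the Deep RL course convention and is an assumption, so check the repo's file list.

```python
# Hedged completion of the TODO above.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="mxdza/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```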
YakovElm/MariaDB_10_BERT_Over_Sampling
YakovElm
2023-06-28T20:45:51Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T20:45:13Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB_10_BERT_Over_Sampling results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB_10_BERT_Over_Sampling This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0642 - Train Accuracy: 0.9813 - Validation Loss: 0.2766 - Validation Accuracy: 0.9447 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5051 | 0.7557 | 0.3875 | 0.8015 | 0 | | 0.1968 | 0.9329 | 0.2355 | 0.9422 | 1 | | 0.0642 | 0.9813 | 0.2766 | 0.9447 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
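Since usage is unspecified above, a hedged TensorFlow inference sketch (the input text is illustrative and the label semantics are not documented on this card):

```python
# Sketch only: label meanings for this classifier are unknown.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "YakovElm/MariaDB_10_BERT_Over_Sampling"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example MariaDB issue text", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```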
mnaylor/mega-base-wikitext
mnaylor
2023-06-28T20:32:48Z
1,615
1
transformers
[ "transformers", "pytorch", "safetensors", "mega", "fill-mask", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-21T20:56:10Z
--- license: apache-2.0 language: - en library_name: transformers --- # Mega Masked LM on wikitext-103 This is the location on the Hugging Face hub for the Mega MLM checkpoint. I trained this model on the `wikitext-103` dataset using standard BERT-style masked LM pretraining using the [original Mega repository](https://github.com/facebookresearch/mega) and uploaded the weights initially to hf.co/mnaylor/mega-wikitext-103. When the implementation of Mega into Hugging Face's `transformers` is finished, the weights here are designed to be used with `MegaForMaskedLM` and are compatible with the other (encoder-based) `MegaFor*` model classes. This model uses the RoBERTa base tokenizer since the Mega paper does not implement a specific tokenizer aside from the character-level tokenizer used to illustrate long-sequence performance.
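Once Mega support lands in `transformers`, loading should look roughly like this; a sketch assuming the `MegaForMaskedLM` class named above and that the repo bundles its RoBERTa-base tokenizer:

```python
# Hedged sketch based on the class name mentioned in this card.
from transformers import AutoTokenizer, MegaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForMaskedLM.from_pretrained("mnaylor/mega-base-wikitext")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
outputs = model(**inputs)  # MLM logits over the vocabulary
```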
LanguageMachines/blip2-opt-2.7b
LanguageMachines
2023-06-28T20:28:53Z
10
0
transformers
[ "transformers", "pytorch", "blip-2", "visual-question-answering", "vision", "image-to-text", "image-captioning", "en", "arxiv:2301.12597", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2023-06-28T19:56:29Z
--- language: en license: mit tags: - vision - image-to-text - image-captioning - visual-question-answering pipeline_tag: image-to-text duplicated_from: Salesforce/blip2-opt-2.7b --- # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, given the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real-world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
fresha/e5-large-v2-endpoint
fresha
2023-06-28T20:21:13Z
21
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "feature-extraction", "mteb", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2023-06-28T18:23:10Z
--- tags: - mteb model-index: - name: e5-large-v2 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.22388059701493 - type: ap value: 43.20816505595132 - type: f1 value: 73.27811303522058 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.748325 - type: ap value: 90.72534979701297 - type: f1 value: 93.73895874282185 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.612 - type: f1 value: 47.61157345898393 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 23.541999999999998 - type: map_at_10 value: 38.208 - type: map_at_100 value: 39.417 - type: map_at_1000 value: 39.428999999999995 - type: map_at_3 value: 33.95 - type: map_at_5 value: 36.329 - type: mrr_at_1 value: 23.755000000000003 - type: mrr_at_10 value: 38.288 - type: mrr_at_100 value: 39.511 - type: mrr_at_1000 value: 39.523 - type: mrr_at_3 value: 34.009 - type: mrr_at_5 value: 36.434 - type: ndcg_at_1 value: 23.541999999999998 - type: ndcg_at_10 value: 46.417 - type: ndcg_at_100 value: 51.812000000000005 - type: ndcg_at_1000 value: 52.137 - type: ndcg_at_3 value: 37.528 - type: ndcg_at_5 value: 41.81 - type: precision_at_1 value: 23.541999999999998 - type: precision_at_10 value: 7.269 - type: precision_at_100 value: 0.9690000000000001 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 15.979 - type: precision_at_5 value: 11.664 - type: recall_at_1 value: 23.541999999999998 - type: recall_at_10 value: 72.688 - type: recall_at_100 value: 96.871 - type: recall_at_1000 value: 99.431 - type: recall_at_3 value: 47.937000000000005 - type: recall_at_5 value: 58.321 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 45.546499570522094 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 41.01607489943561 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.616107510107774 - type: mrr value: 72.75106626214661 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.33018094733868 - type: cos_sim_spearman value: 83.60190492611737 - type: euclidean_pearson value: 82.1492450218961 - type: euclidean_spearman value: 82.70308926526991 - type: manhattan_pearson value: 81.93959600076842 - type: manhattan_spearman value: 82.73260801016369 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - 
type: accuracy value: 84.54545454545455 - type: f1 value: 84.49582530928923 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.362725540120096 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 34.849509608178145 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.502999999999997 - type: map_at_10 value: 43.323 - type: map_at_100 value: 44.708999999999996 - type: map_at_1000 value: 44.838 - type: map_at_3 value: 38.987 - type: map_at_5 value: 41.516999999999996 - type: mrr_at_1 value: 38.769999999999996 - type: mrr_at_10 value: 49.13 - type: mrr_at_100 value: 49.697 - type: mrr_at_1000 value: 49.741 - type: mrr_at_3 value: 45.804 - type: mrr_at_5 value: 47.842 - type: ndcg_at_1 value: 38.769999999999996 - type: ndcg_at_10 value: 50.266999999999996 - type: ndcg_at_100 value: 54.967 - type: ndcg_at_1000 value: 56.976000000000006 - type: ndcg_at_3 value: 43.823 - type: ndcg_at_5 value: 47.12 - type: precision_at_1 value: 38.769999999999996 - type: precision_at_10 value: 10.057 - type: precision_at_100 value: 1.554 - type: precision_at_1000 value: 0.202 - type: precision_at_3 value: 21.125 - type: precision_at_5 value: 15.851 - type: recall_at_1 value: 31.502999999999997 - type: recall_at_10 value: 63.715999999999994 - type: recall_at_100 value: 83.61800000000001 - type: recall_at_1000 value: 96.63199999999999 - type: recall_at_3 value: 45.403 - type: recall_at_5 value: 54.481 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.833000000000002 - type: map_at_10 value: 37.330999999999996 - type: map_at_100 value: 38.580999999999996 - type: map_at_1000 value: 38.708 - type: map_at_3 value: 34.713 - type: map_at_5 value: 36.104 - type: mrr_at_1 value: 35.223 - type: mrr_at_10 value: 43.419000000000004 - type: mrr_at_100 value: 44.198 - type: mrr_at_1000 value: 44.249 - type: mrr_at_3 value: 41.614000000000004 - type: mrr_at_5 value: 42.553000000000004 - type: ndcg_at_1 value: 35.223 - type: ndcg_at_10 value: 42.687999999999995 - type: ndcg_at_100 value: 47.447 - type: ndcg_at_1000 value: 49.701 - type: ndcg_at_3 value: 39.162 - type: ndcg_at_5 value: 40.557 - type: precision_at_1 value: 35.223 - type: precision_at_10 value: 7.962 - type: precision_at_100 value: 1.304 - type: precision_at_1000 value: 0.18 - type: precision_at_3 value: 19.023 - type: precision_at_5 value: 13.184999999999999 - type: recall_at_1 value: 27.833000000000002 - type: recall_at_10 value: 51.881 - type: recall_at_100 value: 72.04 - type: recall_at_1000 value: 86.644 - type: recall_at_3 value: 40.778 - type: recall_at_5 value: 45.176 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.175 - type: map_at_10 value: 51.174 - type: map_at_100 value: 52.26499999999999 - type: map_at_1000 value: 52.315999999999995 - type: map_at_3 value: 47.897 - type: map_at_5 value: 49.703 - type: mrr_at_1 value: 43.448 - type: mrr_at_10 value: 54.505 - type: 
mrr_at_100 value: 55.216 - type: mrr_at_1000 value: 55.242000000000004 - type: mrr_at_3 value: 51.98500000000001 - type: mrr_at_5 value: 53.434000000000005 - type: ndcg_at_1 value: 43.448 - type: ndcg_at_10 value: 57.282 - type: ndcg_at_100 value: 61.537 - type: ndcg_at_1000 value: 62.546 - type: ndcg_at_3 value: 51.73799999999999 - type: ndcg_at_5 value: 54.324 - type: precision_at_1 value: 43.448 - type: precision_at_10 value: 9.292 - type: precision_at_100 value: 1.233 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 23.218 - type: precision_at_5 value: 15.887 - type: recall_at_1 value: 38.175 - type: recall_at_10 value: 72.00999999999999 - type: recall_at_100 value: 90.155 - type: recall_at_1000 value: 97.257 - type: recall_at_3 value: 57.133 - type: recall_at_5 value: 63.424 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.405 - type: map_at_10 value: 30.043 - type: map_at_100 value: 31.191000000000003 - type: map_at_1000 value: 31.275 - type: map_at_3 value: 27.034000000000002 - type: map_at_5 value: 28.688000000000002 - type: mrr_at_1 value: 24.068 - type: mrr_at_10 value: 31.993 - type: mrr_at_100 value: 32.992 - type: mrr_at_1000 value: 33.050000000000004 - type: mrr_at_3 value: 28.964000000000002 - type: mrr_at_5 value: 30.653000000000002 - type: ndcg_at_1 value: 24.068 - type: ndcg_at_10 value: 35.198 - type: ndcg_at_100 value: 40.709 - type: ndcg_at_1000 value: 42.855 - type: ndcg_at_3 value: 29.139 - type: ndcg_at_5 value: 32.045 - type: precision_at_1 value: 24.068 - type: precision_at_10 value: 5.65 - type: precision_at_100 value: 0.885 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 12.279 - type: precision_at_5 value: 8.994 - type: recall_at_1 value: 22.405 - type: recall_at_10 value: 49.391 - type: recall_at_100 value: 74.53699999999999 - type: recall_at_1000 value: 90.605 - type: recall_at_3 value: 33.126 - type: recall_at_5 value: 40.073 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 13.309999999999999 - type: map_at_10 value: 20.688000000000002 - type: map_at_100 value: 22.022 - type: map_at_1000 value: 22.152 - type: map_at_3 value: 17.954 - type: map_at_5 value: 19.439 - type: mrr_at_1 value: 16.294 - type: mrr_at_10 value: 24.479 - type: mrr_at_100 value: 25.515 - type: mrr_at_1000 value: 25.593 - type: mrr_at_3 value: 21.642 - type: mrr_at_5 value: 23.189999999999998 - type: ndcg_at_1 value: 16.294 - type: ndcg_at_10 value: 25.833000000000002 - type: ndcg_at_100 value: 32.074999999999996 - type: ndcg_at_1000 value: 35.083 - type: ndcg_at_3 value: 20.493 - type: ndcg_at_5 value: 22.949 - type: precision_at_1 value: 16.294 - type: precision_at_10 value: 5.112 - type: precision_at_100 value: 0.96 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 9.908999999999999 - type: precision_at_5 value: 7.587000000000001 - type: recall_at_1 value: 13.309999999999999 - type: recall_at_10 value: 37.851 - type: recall_at_100 value: 64.835 - type: recall_at_1000 value: 86.334 - type: recall_at_3 value: 23.493 - type: recall_at_5 value: 29.528 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.857999999999997 - type: map_at_10 value: 35.503 - 
type: map_at_100 value: 36.957 - type: map_at_1000 value: 37.065 - type: map_at_3 value: 32.275999999999996 - type: map_at_5 value: 34.119 - type: mrr_at_1 value: 31.954 - type: mrr_at_10 value: 40.851 - type: mrr_at_100 value: 41.863 - type: mrr_at_1000 value: 41.900999999999996 - type: mrr_at_3 value: 38.129999999999995 - type: mrr_at_5 value: 39.737 - type: ndcg_at_1 value: 31.954 - type: ndcg_at_10 value: 41.343999999999994 - type: ndcg_at_100 value: 47.397 - type: ndcg_at_1000 value: 49.501 - type: ndcg_at_3 value: 36.047000000000004 - type: ndcg_at_5 value: 38.639 - type: precision_at_1 value: 31.954 - type: precision_at_10 value: 7.68 - type: precision_at_100 value: 1.247 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 17.132 - type: precision_at_5 value: 12.589 - type: recall_at_1 value: 25.857999999999997 - type: recall_at_10 value: 53.43599999999999 - type: recall_at_100 value: 78.82400000000001 - type: recall_at_1000 value: 92.78999999999999 - type: recall_at_3 value: 38.655 - type: recall_at_5 value: 45.216 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.709 - type: map_at_10 value: 34.318 - type: map_at_100 value: 35.657 - type: map_at_1000 value: 35.783 - type: map_at_3 value: 31.326999999999998 - type: map_at_5 value: 33.021 - type: mrr_at_1 value: 30.137000000000004 - type: mrr_at_10 value: 39.093 - type: mrr_at_100 value: 39.992 - type: mrr_at_1000 value: 40.056999999999995 - type: mrr_at_3 value: 36.606 - type: mrr_at_5 value: 37.861 - type: ndcg_at_1 value: 30.137000000000004 - type: ndcg_at_10 value: 39.974 - type: ndcg_at_100 value: 45.647999999999996 - type: ndcg_at_1000 value: 48.259 - type: ndcg_at_3 value: 35.028 - type: ndcg_at_5 value: 37.175999999999995 - type: precision_at_1 value: 30.137000000000004 - type: precision_at_10 value: 7.363 - type: precision_at_100 value: 1.184 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 16.857 - type: precision_at_5 value: 11.963 - type: recall_at_1 value: 24.709 - type: recall_at_10 value: 52.087 - type: recall_at_100 value: 76.125 - type: recall_at_1000 value: 93.82300000000001 - type: recall_at_3 value: 38.149 - type: recall_at_5 value: 43.984 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.40791666666667 - type: map_at_10 value: 32.458083333333335 - type: map_at_100 value: 33.691916666666664 - type: map_at_1000 value: 33.81191666666666 - type: map_at_3 value: 29.51625 - type: map_at_5 value: 31.168083333333335 - type: mrr_at_1 value: 27.96591666666666 - type: mrr_at_10 value: 36.528583333333344 - type: mrr_at_100 value: 37.404 - type: mrr_at_1000 value: 37.464333333333336 - type: mrr_at_3 value: 33.92883333333333 - type: mrr_at_5 value: 35.41933333333333 - type: ndcg_at_1 value: 27.96591666666666 - type: ndcg_at_10 value: 37.89141666666666 - type: ndcg_at_100 value: 43.23066666666666 - type: ndcg_at_1000 value: 45.63258333333333 - type: ndcg_at_3 value: 32.811249999999994 - type: ndcg_at_5 value: 35.22566666666667 - type: precision_at_1 value: 27.96591666666666 - type: precision_at_10 value: 6.834083333333332 - type: precision_at_100 value: 1.12225 - type: precision_at_1000 value: 0.15241666666666667 - type: precision_at_3 value: 15.264333333333335 - type: precision_at_5 value: 11.039416666666666 - type: recall_at_1 value: 
23.40791666666667 - type: recall_at_10 value: 49.927083333333336 - type: recall_at_100 value: 73.44641666666668 - type: recall_at_1000 value: 90.19950000000001 - type: recall_at_3 value: 35.88341666666667 - type: recall_at_5 value: 42.061249999999994 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.592000000000002 - type: map_at_10 value: 26.895999999999997 - type: map_at_100 value: 27.921000000000003 - type: map_at_1000 value: 28.02 - type: map_at_3 value: 24.883 - type: map_at_5 value: 25.812 - type: mrr_at_1 value: 22.698999999999998 - type: mrr_at_10 value: 29.520999999999997 - type: mrr_at_100 value: 30.458000000000002 - type: mrr_at_1000 value: 30.526999999999997 - type: mrr_at_3 value: 27.633000000000003 - type: mrr_at_5 value: 28.483999999999998 - type: ndcg_at_1 value: 22.698999999999998 - type: ndcg_at_10 value: 31.061 - type: ndcg_at_100 value: 36.398 - type: ndcg_at_1000 value: 38.89 - type: ndcg_at_3 value: 27.149 - type: ndcg_at_5 value: 28.627000000000002 - type: precision_at_1 value: 22.698999999999998 - type: precision_at_10 value: 5.106999999999999 - type: precision_at_100 value: 0.857 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 11.963 - type: precision_at_5 value: 8.221 - type: recall_at_1 value: 19.592000000000002 - type: recall_at_10 value: 41.329 - type: recall_at_100 value: 66.094 - type: recall_at_1000 value: 84.511 - type: recall_at_3 value: 30.61 - type: recall_at_5 value: 34.213 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 14.71 - type: map_at_10 value: 20.965 - type: map_at_100 value: 21.994 - type: map_at_1000 value: 22.133 - type: map_at_3 value: 18.741 - type: map_at_5 value: 19.951 - type: mrr_at_1 value: 18.307000000000002 - type: mrr_at_10 value: 24.66 - type: mrr_at_100 value: 25.540000000000003 - type: mrr_at_1000 value: 25.629 - type: mrr_at_3 value: 22.511 - type: mrr_at_5 value: 23.72 - type: ndcg_at_1 value: 18.307000000000002 - type: ndcg_at_10 value: 25.153 - type: ndcg_at_100 value: 30.229 - type: ndcg_at_1000 value: 33.623 - type: ndcg_at_3 value: 21.203 - type: ndcg_at_5 value: 23.006999999999998 - type: precision_at_1 value: 18.307000000000002 - type: precision_at_10 value: 4.725 - type: precision_at_100 value: 0.8659999999999999 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 10.14 - type: precision_at_5 value: 7.481 - type: recall_at_1 value: 14.71 - type: recall_at_10 value: 34.087 - type: recall_at_100 value: 57.147999999999996 - type: recall_at_1000 value: 81.777 - type: recall_at_3 value: 22.996 - type: recall_at_5 value: 27.73 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.472 - type: map_at_10 value: 32.699 - type: map_at_100 value: 33.867000000000004 - type: map_at_1000 value: 33.967000000000006 - type: map_at_3 value: 29.718 - type: map_at_5 value: 31.345 - type: mrr_at_1 value: 28.265 - type: mrr_at_10 value: 36.945 - type: mrr_at_100 value: 37.794 - type: mrr_at_1000 value: 37.857 - type: mrr_at_3 value: 34.266000000000005 - type: mrr_at_5 value: 35.768 - type: ndcg_at_1 value: 28.265 - type: ndcg_at_10 value: 38.35 - type: ndcg_at_100 value: 43.739 - type: ndcg_at_1000 value: 46.087 - type: ndcg_at_3 value: 
33.004 - type: ndcg_at_5 value: 35.411 - type: precision_at_1 value: 28.265 - type: precision_at_10 value: 6.715999999999999 - type: precision_at_100 value: 1.059 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 15.299 - type: precision_at_5 value: 10.951 - type: recall_at_1 value: 23.472 - type: recall_at_10 value: 51.413 - type: recall_at_100 value: 75.17 - type: recall_at_1000 value: 91.577 - type: recall_at_3 value: 36.651 - type: recall_at_5 value: 42.814 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.666 - type: map_at_10 value: 32.963 - type: map_at_100 value: 34.544999999999995 - type: map_at_1000 value: 34.792 - type: map_at_3 value: 29.74 - type: map_at_5 value: 31.5 - type: mrr_at_1 value: 29.051 - type: mrr_at_10 value: 38.013000000000005 - type: mrr_at_100 value: 38.997 - type: mrr_at_1000 value: 39.055 - type: mrr_at_3 value: 34.947 - type: mrr_at_5 value: 36.815 - type: ndcg_at_1 value: 29.051 - type: ndcg_at_10 value: 39.361000000000004 - type: ndcg_at_100 value: 45.186 - type: ndcg_at_1000 value: 47.867 - type: ndcg_at_3 value: 33.797 - type: ndcg_at_5 value: 36.456 - type: precision_at_1 value: 29.051 - type: precision_at_10 value: 7.668 - type: precision_at_100 value: 1.532 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 15.876000000000001 - type: precision_at_5 value: 11.779 - type: recall_at_1 value: 23.666 - type: recall_at_10 value: 51.858000000000004 - type: recall_at_100 value: 77.805 - type: recall_at_1000 value: 94.504 - type: recall_at_3 value: 36.207 - type: recall_at_5 value: 43.094 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.662 - type: map_at_10 value: 23.594 - type: map_at_100 value: 24.593999999999998 - type: map_at_1000 value: 24.694 - type: map_at_3 value: 20.925 - type: map_at_5 value: 22.817999999999998 - type: mrr_at_1 value: 17.375 - type: mrr_at_10 value: 25.734 - type: mrr_at_100 value: 26.586 - type: mrr_at_1000 value: 26.671 - type: mrr_at_3 value: 23.044 - type: mrr_at_5 value: 24.975 - type: ndcg_at_1 value: 17.375 - type: ndcg_at_10 value: 28.186 - type: ndcg_at_100 value: 33.436 - type: ndcg_at_1000 value: 36.203 - type: ndcg_at_3 value: 23.152 - type: ndcg_at_5 value: 26.397 - type: precision_at_1 value: 17.375 - type: precision_at_10 value: 4.677 - type: precision_at_100 value: 0.786 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 10.351 - type: precision_at_5 value: 7.985 - type: recall_at_1 value: 15.662 - type: recall_at_10 value: 40.066 - type: recall_at_100 value: 65.006 - type: recall_at_1000 value: 85.94000000000001 - type: recall_at_3 value: 27.400000000000002 - type: recall_at_5 value: 35.002 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 8.853 - type: map_at_10 value: 15.568000000000001 - type: map_at_100 value: 17.383000000000003 - type: map_at_1000 value: 17.584 - type: map_at_3 value: 12.561 - type: map_at_5 value: 14.056 - type: mrr_at_1 value: 18.958 - type: mrr_at_10 value: 28.288000000000004 - type: mrr_at_100 value: 29.432000000000002 - type: mrr_at_1000 value: 29.498 - type: mrr_at_3 value: 25.049 - type: mrr_at_5 value: 26.857 - type: ndcg_at_1 value: 18.958 - type: ndcg_at_10 value: 
22.21 - type: ndcg_at_100 value: 29.596 - type: ndcg_at_1000 value: 33.583 - type: ndcg_at_3 value: 16.994999999999997 - type: ndcg_at_5 value: 18.95 - type: precision_at_1 value: 18.958 - type: precision_at_10 value: 7.192 - type: precision_at_100 value: 1.5 - type: precision_at_1000 value: 0.22399999999999998 - type: precision_at_3 value: 12.573 - type: precision_at_5 value: 10.202 - type: recall_at_1 value: 8.853 - type: recall_at_10 value: 28.087 - type: recall_at_100 value: 53.701 - type: recall_at_1000 value: 76.29899999999999 - type: recall_at_3 value: 15.913 - type: recall_at_5 value: 20.658 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.077 - type: map_at_10 value: 20.788999999999998 - type: map_at_100 value: 30.429000000000002 - type: map_at_1000 value: 32.143 - type: map_at_3 value: 14.692 - type: map_at_5 value: 17.139 - type: mrr_at_1 value: 70.75 - type: mrr_at_10 value: 78.036 - type: mrr_at_100 value: 78.401 - type: mrr_at_1000 value: 78.404 - type: mrr_at_3 value: 76.75 - type: mrr_at_5 value: 77.47500000000001 - type: ndcg_at_1 value: 58.12500000000001 - type: ndcg_at_10 value: 44.015 - type: ndcg_at_100 value: 49.247 - type: ndcg_at_1000 value: 56.211999999999996 - type: ndcg_at_3 value: 49.151 - type: ndcg_at_5 value: 46.195 - type: precision_at_1 value: 70.75 - type: precision_at_10 value: 35.5 - type: precision_at_100 value: 11.355 - type: precision_at_1000 value: 2.1950000000000003 - type: precision_at_3 value: 53.083000000000006 - type: precision_at_5 value: 44.800000000000004 - type: recall_at_1 value: 9.077 - type: recall_at_10 value: 26.259 - type: recall_at_100 value: 56.547000000000004 - type: recall_at_1000 value: 78.551 - type: recall_at_3 value: 16.162000000000003 - type: recall_at_5 value: 19.753999999999998 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 49.44500000000001 - type: f1 value: 44.67067691783401 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 68.182 - type: map_at_10 value: 78.223 - type: map_at_100 value: 78.498 - type: map_at_1000 value: 78.512 - type: map_at_3 value: 76.71 - type: map_at_5 value: 77.725 - type: mrr_at_1 value: 73.177 - type: mrr_at_10 value: 82.513 - type: mrr_at_100 value: 82.633 - type: mrr_at_1000 value: 82.635 - type: mrr_at_3 value: 81.376 - type: mrr_at_5 value: 82.182 - type: ndcg_at_1 value: 73.177 - type: ndcg_at_10 value: 82.829 - type: ndcg_at_100 value: 83.84 - type: ndcg_at_1000 value: 84.07900000000001 - type: ndcg_at_3 value: 80.303 - type: ndcg_at_5 value: 81.846 - type: precision_at_1 value: 73.177 - type: precision_at_10 value: 10.241999999999999 - type: precision_at_100 value: 1.099 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 31.247999999999998 - type: precision_at_5 value: 19.697 - type: recall_at_1 value: 68.182 - type: recall_at_10 value: 92.657 - type: recall_at_100 value: 96.709 - type: recall_at_1000 value: 98.184 - type: recall_at_3 value: 85.9 - type: recall_at_5 value: 89.755 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 21.108 - type: map_at_10 value: 33.342 - type: map_at_100 value: 35.281 - type: map_at_1000 value: 35.478 
- type: map_at_3 value: 29.067 - type: map_at_5 value: 31.563000000000002 - type: mrr_at_1 value: 41.667 - type: mrr_at_10 value: 49.913000000000004 - type: mrr_at_100 value: 50.724000000000004 - type: mrr_at_1000 value: 50.766 - type: mrr_at_3 value: 47.504999999999995 - type: mrr_at_5 value: 49.033 - type: ndcg_at_1 value: 41.667 - type: ndcg_at_10 value: 41.144 - type: ndcg_at_100 value: 48.326 - type: ndcg_at_1000 value: 51.486 - type: ndcg_at_3 value: 37.486999999999995 - type: ndcg_at_5 value: 38.78 - type: precision_at_1 value: 41.667 - type: precision_at_10 value: 11.358 - type: precision_at_100 value: 1.873 - type: precision_at_1000 value: 0.244 - type: precision_at_3 value: 25 - type: precision_at_5 value: 18.519 - type: recall_at_1 value: 21.108 - type: recall_at_10 value: 47.249 - type: recall_at_100 value: 74.52 - type: recall_at_1000 value: 93.31 - type: recall_at_3 value: 33.271 - type: recall_at_5 value: 39.723000000000006 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 40.317 - type: map_at_10 value: 64.861 - type: map_at_100 value: 65.697 - type: map_at_1000 value: 65.755 - type: map_at_3 value: 61.258 - type: map_at_5 value: 63.590999999999994 - type: mrr_at_1 value: 80.635 - type: mrr_at_10 value: 86.528 - type: mrr_at_100 value: 86.66199999999999 - type: mrr_at_1000 value: 86.666 - type: mrr_at_3 value: 85.744 - type: mrr_at_5 value: 86.24300000000001 - type: ndcg_at_1 value: 80.635 - type: ndcg_at_10 value: 73.13199999999999 - type: ndcg_at_100 value: 75.927 - type: ndcg_at_1000 value: 76.976 - type: ndcg_at_3 value: 68.241 - type: ndcg_at_5 value: 71.071 - type: precision_at_1 value: 80.635 - type: precision_at_10 value: 15.326 - type: precision_at_100 value: 1.7500000000000002 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 43.961 - type: precision_at_5 value: 28.599999999999998 - type: recall_at_1 value: 40.317 - type: recall_at_10 value: 76.631 - type: recall_at_100 value: 87.495 - type: recall_at_1000 value: 94.362 - type: recall_at_3 value: 65.94200000000001 - type: recall_at_5 value: 71.499 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 91.686 - type: ap value: 87.5577120393173 - type: f1 value: 91.6629447355139 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 23.702 - type: map_at_10 value: 36.414 - type: map_at_100 value: 37.561 - type: map_at_1000 value: 37.605 - type: map_at_3 value: 32.456 - type: map_at_5 value: 34.827000000000005 - type: mrr_at_1 value: 24.355 - type: mrr_at_10 value: 37.01 - type: mrr_at_100 value: 38.085 - type: mrr_at_1000 value: 38.123000000000005 - type: mrr_at_3 value: 33.117999999999995 - type: mrr_at_5 value: 35.452 - type: ndcg_at_1 value: 24.384 - type: ndcg_at_10 value: 43.456 - type: ndcg_at_100 value: 48.892 - type: ndcg_at_1000 value: 49.964 - type: ndcg_at_3 value: 35.475 - type: ndcg_at_5 value: 39.711 - type: precision_at_1 value: 24.384 - type: precision_at_10 value: 6.7940000000000005 - type: precision_at_100 value: 0.951 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.052999999999999 - type: precision_at_5 value: 11.189 - type: recall_at_1 value: 23.702 - type: recall_at_10 value: 65.057 - type: recall_at_100 value: 90.021 - type: 
recall_at_1000 value: 98.142 - type: recall_at_3 value: 43.551 - type: recall_at_5 value: 53.738 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.62380300957591 - type: f1 value: 94.49871222100734 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.14090287277702 - type: f1 value: 60.32101258220515 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.84330867518494 - type: f1 value: 71.92248688515255 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.10692669804976 - type: f1 value: 77.9904839122866 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.822988923078444 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.38394880253403 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.82504612539082 - type: mrr value: 32.84462298174977 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.029 - type: map_at_10 value: 14.088999999999999 - type: map_at_100 value: 17.601 - type: map_at_1000 value: 19.144 - type: map_at_3 value: 10.156 - type: map_at_5 value: 11.892 - type: mrr_at_1 value: 46.44 - type: mrr_at_10 value: 56.596999999999994 - type: mrr_at_100 value: 57.11000000000001 - type: mrr_at_1000 value: 57.14 - type: mrr_at_3 value: 54.334 - type: mrr_at_5 value: 55.774 - type: ndcg_at_1 value: 44.891999999999996 - type: ndcg_at_10 value: 37.134 - type: ndcg_at_100 value: 33.652 - type: ndcg_at_1000 value: 42.548 - type: ndcg_at_3 value: 41.851 - type: ndcg_at_5 value: 39.842 - type: precision_at_1 value: 46.44 - type: precision_at_10 value: 27.647 - type: precision_at_100 value: 8.309999999999999 - type: precision_at_1000 value: 2.146 - type: precision_at_3 value: 39.422000000000004 - type: precision_at_5 value: 34.675 - type: recall_at_1 value: 6.029 - type: recall_at_10 value: 18.907 - type: recall_at_100 value: 33.76 - type: recall_at_1000 value: 65.14999999999999 - type: recall_at_3 value: 11.584999999999999 - type: recall_at_5 value: 14.626 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 39.373000000000005 - type: map_at_10 value: 55.836 - type: map_at_100 value: 56.611999999999995 - type: map_at_1000 value: 56.63 - type: map_at_3 value: 51.747 - type: map_at_5 value: 54.337999999999994 - type: mrr_at_1 value: 44.147999999999996 - type: mrr_at_10 value: 58.42699999999999 - type: mrr_at_100 
value: 58.902 - type: mrr_at_1000 value: 58.914 - type: mrr_at_3 value: 55.156000000000006 - type: mrr_at_5 value: 57.291000000000004 - type: ndcg_at_1 value: 44.119 - type: ndcg_at_10 value: 63.444 - type: ndcg_at_100 value: 66.40599999999999 - type: ndcg_at_1000 value: 66.822 - type: ndcg_at_3 value: 55.962 - type: ndcg_at_5 value: 60.228 - type: precision_at_1 value: 44.119 - type: precision_at_10 value: 10.006 - type: precision_at_100 value: 1.17 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.135 - type: precision_at_5 value: 17.59 - type: recall_at_1 value: 39.373000000000005 - type: recall_at_10 value: 83.78999999999999 - type: recall_at_100 value: 96.246 - type: recall_at_1000 value: 99.324 - type: recall_at_3 value: 64.71900000000001 - type: recall_at_5 value: 74.508 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 69.199 - type: map_at_10 value: 82.892 - type: map_at_100 value: 83.578 - type: map_at_1000 value: 83.598 - type: map_at_3 value: 79.948 - type: map_at_5 value: 81.779 - type: mrr_at_1 value: 79.67 - type: mrr_at_10 value: 86.115 - type: mrr_at_100 value: 86.249 - type: mrr_at_1000 value: 86.251 - type: mrr_at_3 value: 85.08200000000001 - type: mrr_at_5 value: 85.783 - type: ndcg_at_1 value: 79.67 - type: ndcg_at_10 value: 86.839 - type: ndcg_at_100 value: 88.252 - type: ndcg_at_1000 value: 88.401 - type: ndcg_at_3 value: 83.86200000000001 - type: ndcg_at_5 value: 85.473 - type: precision_at_1 value: 79.67 - type: precision_at_10 value: 13.19 - type: precision_at_100 value: 1.521 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 36.677 - type: precision_at_5 value: 24.118000000000002 - type: recall_at_1 value: 69.199 - type: recall_at_10 value: 94.321 - type: recall_at_100 value: 99.20400000000001 - type: recall_at_1000 value: 99.947 - type: recall_at_3 value: 85.787 - type: recall_at_5 value: 90.365 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 55.82810046856353 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.38132611783628 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 5.127000000000001 - type: map_at_10 value: 12.235 - type: map_at_100 value: 14.417 - type: map_at_1000 value: 14.75 - type: map_at_3 value: 8.906 - type: map_at_5 value: 10.591000000000001 - type: mrr_at_1 value: 25.2 - type: mrr_at_10 value: 35.879 - type: mrr_at_100 value: 36.935 - type: mrr_at_1000 value: 36.997 - type: mrr_at_3 value: 32.783 - type: mrr_at_5 value: 34.367999999999995 - type: ndcg_at_1 value: 25.2 - type: ndcg_at_10 value: 20.509 - type: ndcg_at_100 value: 28.67 - type: ndcg_at_1000 value: 34.42 - type: ndcg_at_3 value: 19.948 - type: ndcg_at_5 value: 17.166 - type: precision_at_1 value: 25.2 - type: precision_at_10 value: 10.440000000000001 - type: precision_at_100 value: 2.214 - type: precision_at_1000 value: 0.359 - type: precision_at_3 value: 18.533 - type: precision_at_5 value: 14.860000000000001 - type: recall_at_1 value: 5.127000000000001 - type: recall_at_10 value: 21.147 - type: recall_at_100 value: 44.946999999999996 - type: 
recall_at_1000 value: 72.89 - type: recall_at_3 value: 11.277 - type: recall_at_5 value: 15.042 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.0373011786213 - type: cos_sim_spearman value: 79.27889560856613 - type: euclidean_pearson value: 80.31186315495655 - type: euclidean_spearman value: 79.41630415280811 - type: manhattan_pearson value: 80.31755140442013 - type: manhattan_spearman value: 79.43069870027611 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.8659751342045 - type: cos_sim_spearman value: 76.95377612997667 - type: euclidean_pearson value: 81.24552945497848 - type: euclidean_spearman value: 77.18236963555253 - type: manhattan_pearson value: 81.26477607759037 - type: manhattan_spearman value: 77.13821753062756 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 83.34597139044875 - type: cos_sim_spearman value: 84.124169425592 - type: euclidean_pearson value: 83.68590721511401 - type: euclidean_spearman value: 84.18846190846398 - type: manhattan_pearson value: 83.57630235061498 - type: manhattan_spearman value: 84.10244043726902 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.67641885599572 - type: cos_sim_spearman value: 80.46450725650428 - type: euclidean_pearson value: 81.61645042715865 - type: euclidean_spearman value: 80.61418394236874 - type: manhattan_pearson value: 81.55712034928871 - type: manhattan_spearman value: 80.57905670523951 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.86650310886782 - type: cos_sim_spearman value: 89.76081629222328 - type: euclidean_pearson value: 89.1530747029954 - type: euclidean_spearman value: 89.80990657280248 - type: manhattan_pearson value: 89.10640563278132 - type: manhattan_spearman value: 89.76282108434047 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.93864027911118 - type: cos_sim_spearman value: 85.47096193999023 - type: euclidean_pearson value: 85.03141840870533 - type: euclidean_spearman value: 85.43124029598181 - type: manhattan_pearson value: 84.99002664393512 - type: manhattan_spearman value: 85.39169195120834 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.7045343749832 - type: cos_sim_spearman value: 89.03262221146677 - type: euclidean_pearson value: 89.56078218264365 - type: euclidean_spearman value: 89.17827006466868 - type: manhattan_pearson value: 89.52717595468582 - type: manhattan_spearman value: 89.15878115952923 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.20191302875551 - type: 
cos_sim_spearman value: 64.11446552557646 - type: euclidean_pearson value: 64.6918197393619 - type: euclidean_spearman value: 63.440182631197764 - type: manhattan_pearson value: 64.55692904121835 - type: manhattan_spearman value: 63.424877742756266 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.37793104662344 - type: cos_sim_spearman value: 87.7357802629067 - type: euclidean_pearson value: 87.4286301545109 - type: euclidean_spearman value: 87.78452920777421 - type: manhattan_pearson value: 87.42445169331255 - type: manhattan_spearman value: 87.78537677249598 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 84.31465405081792 - type: mrr value: 95.7173781193389 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 57.760999999999996 - type: map_at_10 value: 67.904 - type: map_at_100 value: 68.539 - type: map_at_1000 value: 68.562 - type: map_at_3 value: 65.415 - type: map_at_5 value: 66.788 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 68.797 - type: mrr_at_100 value: 69.236 - type: mrr_at_1000 value: 69.257 - type: mrr_at_3 value: 66.667 - type: mrr_at_5 value: 67.967 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 72.24199999999999 - type: ndcg_at_100 value: 74.86 - type: ndcg_at_1000 value: 75.354 - type: ndcg_at_3 value: 67.93400000000001 - type: ndcg_at_5 value: 70.02199999999999 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.533 - type: precision_at_100 value: 1.09 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.778000000000002 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 57.760999999999996 - type: recall_at_10 value: 84.383 - type: recall_at_100 value: 96.267 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 72.628 - type: recall_at_5 value: 78.094 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8029702970297 - type: cos_sim_ap value: 94.9210324173411 - type: cos_sim_f1 value: 89.8521162672106 - type: cos_sim_precision value: 91.67533818938605 - type: cos_sim_recall value: 88.1 - type: dot_accuracy value: 99.69504950495049 - type: dot_ap value: 90.4919719146181 - type: dot_f1 value: 84.72289156626506 - type: dot_precision value: 81.76744186046511 - type: dot_recall value: 87.9 - type: euclidean_accuracy value: 99.79702970297029 - type: euclidean_ap value: 94.87827463795753 - type: euclidean_f1 value: 89.55680081507896 - type: euclidean_precision value: 91.27725856697819 - type: euclidean_recall value: 87.9 - type: manhattan_accuracy value: 99.7990099009901 - type: manhattan_ap value: 94.87587025149682 - type: manhattan_f1 value: 89.76298537569339 - type: manhattan_precision value: 90.53916581892166 - type: manhattan_recall value: 89 - type: max_accuracy value: 99.8029702970297 - type: max_ap value: 94.9210324173411 - type: max_f1 value: 89.8521162672106 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering 
config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.92385753948724 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.671756975431144 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.677928036739004 - type: mrr value: 51.56413133435193 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.523589340819683 - type: cos_sim_spearman value: 30.187407518823235 - type: dot_pearson value: 29.039713969699015 - type: dot_spearman value: 29.114740651155508 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.211 - type: map_at_10 value: 1.6199999999999999 - type: map_at_100 value: 8.658000000000001 - type: map_at_1000 value: 21.538 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.919 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.18599999999999 - type: mrr_at_100 value: 86.18599999999999 - type: mrr_at_1000 value: 86.18599999999999 - type: mrr_at_3 value: 85 - type: mrr_at_5 value: 85.9 - type: ndcg_at_1 value: 74 - type: ndcg_at_10 value: 66.542 - type: ndcg_at_100 value: 50.163999999999994 - type: ndcg_at_1000 value: 45.696999999999996 - type: ndcg_at_3 value: 71.531 - type: ndcg_at_5 value: 70.45 - type: precision_at_1 value: 78 - type: precision_at_10 value: 69.39999999999999 - type: precision_at_100 value: 51.06 - type: precision_at_1000 value: 20.022000000000002 - type: precision_at_3 value: 76 - type: precision_at_5 value: 74.8 - type: recall_at_1 value: 0.211 - type: recall_at_10 value: 1.813 - type: recall_at_100 value: 12.098 - type: recall_at_1000 value: 42.618 - type: recall_at_3 value: 0.603 - type: recall_at_5 value: 0.987 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.2079999999999997 - type: map_at_10 value: 7.777000000000001 - type: map_at_100 value: 12.825000000000001 - type: map_at_1000 value: 14.196 - type: map_at_3 value: 4.285 - type: map_at_5 value: 6.177 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 42.635 - type: mrr_at_100 value: 43.955 - type: mrr_at_1000 value: 43.955 - type: mrr_at_3 value: 38.435 - type: mrr_at_5 value: 41.088 - type: ndcg_at_1 value: 28.571 - type: ndcg_at_10 value: 20.666999999999998 - type: ndcg_at_100 value: 31.840000000000003 - type: ndcg_at_1000 value: 43.191 - type: ndcg_at_3 value: 23.45 - type: ndcg_at_5 value: 22.994 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 17.959 - type: precision_at_100 value: 6.755 - type: precision_at_1000 value: 1.4200000000000002 - type: precision_at_3 value: 23.810000000000002 - type: precision_at_5 value: 23.673 - type: recall_at_1 value: 2.2079999999999997 - type: recall_at_10 value: 13.144 - type: recall_at_100 value: 42.491 - type: recall_at_1000 value: 77.04299999999999 - type: recall_at_3 value: 5.3469999999999995 - type: recall_at_5 value: 9.139 - task: type: Classification dataset: 
type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.9044 - type: ap value: 14.625783489340755 - type: f1 value: 54.814936562590546 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.94227504244483 - type: f1 value: 61.22516038508854 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 49.602409155145864 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.94641473445789 - type: cos_sim_ap value: 76.91572747061197 - type: cos_sim_f1 value: 70.14348097317529 - type: cos_sim_precision value: 66.53254437869822 - type: cos_sim_recall value: 74.1688654353562 - type: dot_accuracy value: 84.80061989628658 - type: dot_ap value: 70.7952548895177 - type: dot_f1 value: 65.44780728844965 - type: dot_precision value: 61.53310104529617 - type: dot_recall value: 69.89445910290237 - type: euclidean_accuracy value: 86.94641473445789 - type: euclidean_ap value: 76.80774009393652 - type: euclidean_f1 value: 70.30522503879979 - type: euclidean_precision value: 68.94977168949772 - type: euclidean_recall value: 71.71503957783642 - type: manhattan_accuracy value: 86.8629671574179 - type: manhattan_ap value: 76.76518632600317 - type: manhattan_f1 value: 70.16056518946692 - type: manhattan_precision value: 68.360450563204 - type: manhattan_recall value: 72.0580474934037 - type: max_accuracy value: 86.94641473445789 - type: max_ap value: 76.91572747061197 - type: max_f1 value: 70.30522503879979 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.10428066907285 - type: cos_sim_ap value: 86.25114759921435 - type: cos_sim_f1 value: 78.37857884586856 - type: cos_sim_precision value: 75.60818546078993 - type: cos_sim_recall value: 81.35971666153372 - type: dot_accuracy value: 87.41995575736406 - type: dot_ap value: 81.51838010086782 - type: dot_f1 value: 74.77398015435503 - type: dot_precision value: 71.53002390662354 - type: dot_recall value: 78.32614721281182 - type: euclidean_accuracy value: 89.12368533395428 - type: euclidean_ap value: 86.33456799874504 - type: euclidean_f1 value: 78.45496750232127 - type: euclidean_precision value: 75.78388462366364 - type: euclidean_recall value: 81.32121958731136 - type: manhattan_accuracy value: 89.10622113556099 - type: manhattan_ap value: 86.31215061745333 - type: manhattan_f1 value: 78.40684906011539 - type: manhattan_precision value: 75.89536643366722 - type: manhattan_recall value: 81.09023714197721 - type: max_accuracy value: 89.12368533395428 - type: max_ap value: 86.33456799874504 - type: max_f1 value: 78.45496750232127 language: - en license: mit --- # E5-large-v2 [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). 
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 24 layers and the embedding size is 1024. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-v2') model = AutoModel.from_pretrained('intfloat/e5-large-v2') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Citation If you find our paper or models helpful, please consider citing as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
joncrain/test
joncrain
2023-06-28T20:18:56Z
0
0
null
[ "dataset:fka/awesome-chatgpt-prompts", "region:us" ]
null
2023-06-28T20:12:23Z
--- datasets: - fka/awesome-chatgpt-prompts ---
Ocelotr/speecht5_tts-sil
Ocelotr
2023-06-28T20:18:24Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "ara", "generated_from_trainer", "ar", "dataset:SDA_CLEAN_NAJDI", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-06-26T16:45:53Z
--- language: - ar license: mit tags: - ara - generated_from_trainer datasets: - SDA_CLEAN_NAJDI model-index: - name: SpeechT5 TTS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the SDA dataset. It achieves the following results on the evaluation set: - Loss: 0.4853 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 40000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.5703 | 1.49 | 1000 | 0.5289 | | 0.541 | 2.98 | 2000 | 0.5131 | | 0.5487 | 4.46 | 3000 | 0.5059 | | 0.5232 | 5.95 | 4000 | 0.5011 | | 0.5295 | 7.44 | 5000 | 0.4979 | | 0.5257 | 8.93 | 6000 | 0.4970 | | 0.5091 | 10.42 | 7000 | 0.4905 | | 0.5141 | 11.9 | 8000 | 0.4893 | | 0.5033 | 13.39 | 9000 | 0.4865 | | 0.507 | 14.88 | 10000 | 0.4850 | | 0.502 | 16.37 | 11000 | 0.4830 | | 0.497 | 17.86 | 12000 | 0.4823 | | 0.4974 | 19.35 | 13000 | 0.4801 | | 0.4993 | 20.83 | 14000 | 0.4794 | | 0.496 | 22.32 | 15000 | 0.4814 | | 0.4845 | 23.81 | 16000 | 0.4780 | | 0.4977 | 25.3 | 17000 | 0.4775 | | 0.4888 | 26.79 | 18000 | 0.4780 | | 0.4773 | 28.27 | 19000 | 0.4792 | | 0.4914 | 29.76 | 20000 | 0.4817 | | 0.4864 | 31.25 | 21000 | 0.4775 | | 0.486 | 32.74 | 22000 | 0.4773 | | 0.4884 | 34.23 | 23000 | 0.4835 | | 0.4856 | 35.71 | 24000 | 0.4788 | | 0.4814 | 37.2 | 25000 | 0.4811 | | 0.4831 | 38.69 | 26000 | 0.4814 | | 0.4732 | 40.18 | 27000 | 0.4816 | | 0.4846 | 41.67 | 28000 | 0.4812 | | 0.4731 | 43.15 | 29000 | 0.4843 | | 0.4772 | 44.64 | 30000 | 0.4830 | | 0.4793 | 46.13 | 31000 | 0.4834 | | 0.4736 | 47.62 | 32000 | 0.4834 | | 0.4798 | 49.11 | 33000 | 0.4826 | | 0.4744 | 50.6 | 34000 | 0.4841 | | 0.4784 | 52.08 | 35000 | 0.4844 | | 0.4743 | 53.57 | 36000 | 0.4851 | | 0.4779 | 55.06 | 37000 | 0.4854 | | 0.4719 | 56.55 | 38000 | 0.4854 | | 0.4825 | 58.04 | 39000 | 0.4856 | | 0.4805 | 59.52 | 40000 | 0.4853 | ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.0 - Tokenizers 0.13.3
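The card above describes the fine-tune but ships no inference code. Below is a minimal sketch, assuming the standard `transformers` SpeechT5 API and the stock `microsoft/speecht5_hifigan` vocoder; the zero speaker embedding is a placeholder (a real 512-dim x-vector is needed for natural-sounding output), and the sample sentence is illustrative only.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Load the fine-tuned acoustic model and the standard HiFi-GAN vocoder
processor = SpeechT5Processor.from_pretrained("Ocelotr/speecht5_tts-sil")
model = SpeechT5ForTextToSpeech.from_pretrained("Ocelotr/speecht5_tts-sil")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="مرحبا، كيف حالك؟", return_tensors="pt")

# Placeholder speaker embedding (assumption): replace with a real 512-dim x-vector
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```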
Exnactus/lunarlander-v2
Exnactus
2023-06-28T20:15:21Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T20:14:59Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.55 +/- 22.78 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
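In place of the TODO above, here is a hedged loading sketch. The checkpoint filename is an assumption (huggingface_sb3 repos conventionally store `<algo>-<env>.zip`); check the repository's file list for the actual name.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint; the filename below is an assumption, not confirmed by the card
checkpoint = load_from_hub(repo_id="Exnactus/lunarlander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Sanity check: sample a dummy observation and query the policy
obs = model.observation_space.sample()
action, _states = model.predict(obs, deterministic=True)
print(action)
```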
mmirmahdi/Reinforce-CartPole-v1
mmirmahdi
2023-06-28T20:14:00Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T20:13:51Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ahishamm/vit-huge-isic-patch-14
ahishamm
2023-06-28T20:05:46Z
198
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-28T19:59:05Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy - recall - f1 - precision model-index: - name: vit-huge-isic-patch-14 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-huge-isic-patch-14 This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/isic_db dataset. It achieves the following results on the evaluation set: - Loss: 0.6077 - Accuracy: 0.7917 - Recall: 0.7917 - F1: 0.7917 - Precision: 0.7917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
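The card above omits inference code; a minimal sketch using the standard `transformers` image-classification pipeline follows. The image path is a placeholder, and the label names depend on the `id2label` mapping saved with the fine-tune.

```python
from transformers import pipeline

# Load the fine-tuned ViT-Huge skin-lesion classifier
classifier = pipeline("image-classification", model="ahishamm/vit-huge-isic-patch-14")

# "lesion.jpg" is a placeholder path to a dermoscopic image
predictions = classifier("lesion.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```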
globuslabs/ScholarBERT_10
globuslabs
2023-06-28T20:01:02Z
119
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "science", "multi-displinary", "en", "arxiv:2205.11342", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-22T22:22:02Z
--- language: en tags: - science - multi-displinary license: apache-2.0 --- # ScholarBERT_10 Model This is the **ScholarBERT_10** variant of the ScholarBERT model family. The model is pretrained on a large collection of scientific research articles (**22.1B tokens**). This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default. The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters. # Model Architecture | Hyperparameter | Value | |-----------------|:-------:| | Layers | 24 | | Hidden Size | 1024 | | Attention Heads | 16 | | Total Parameters | 340M | # Training Dataset The vocab and the model are pretrained on **10% of the PRD** scientific literature dataset. The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”), a nonprofit organization based in California. This dataset was constructed from a corpus of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals. The articles span Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences, Social Sciences, and Technology. The distribution of articles is shown below. ![corpus pie chart](https://huggingface.co/globuslabs/ScholarBERT/resolve/main/corpus_pie_chart.png) # BibTeX entry and citation info If using this model, please cite this paper: ``` @misc{hong2023diminishing, title={The Diminishing Returns of Masked Language Models to Science}, author={Zhi Hong and Aswathy Ajith and Gregory Pauloski and Eamon Duede and Kyle Chard and Ian Foster}, year={2023}, eprint={2205.11342}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
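No usage example is given above; a minimal fill-mask sketch follows. The example sentence is illustrative, and since ScholarBERT is a BERT-style model the mask token is `[MASK]`.

```python
from transformers import pipeline

# ScholarBERT is a cased BERT-large variant, so input casing matters
unmasker = pipeline("fill-mask", model="globuslabs/ScholarBERT_10")

for prediction in unmasker("The enzyme catalyzes the [MASK] of the substrate."):
    print(prediction["token_str"], round(prediction["score"], 4))
```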
globuslabs/ScholarBERT_10_WB
globuslabs
2023-06-28T20:00:24Z
111
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "science", "multi-displinary", "en", "arxiv:2205.11342", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-22T22:30:01Z
--- language: en tags: - science - multi-displinary license: apache-2.0 --- # ScholarBERT_10_WB Model This is the **ScholarBERT_10_WB** variant of the ScholarBERT model family. The model is pretrained on a large collection of scientific research articles (**22.1B tokens**). Additionally, the pretraining data includes the Wikipedia+BookCorpus, which were used to pretrain the [BERT-base](https://huggingface.co/bert-base-cased) and [BERT-large](https://huggingface.co/bert-large-cased) models. This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default. The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters. # Model Architecture | Hyperparameter | Value | |-----------------|:-------:| | Layers | 24 | | Hidden Size | 1024 | | Attention Heads | 16 | | Total Parameters | 340M | # Training Dataset The vocab and the model are pretrained on **10% of the PRD** scientific literature dataset and Wikipedia+BookCorpus. The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”), a nonprofit organization based in California. This dataset was constructed from a corpus of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals. The articles span Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences, Social Sciences, and Technology. The distribution of articles is shown below. ![corpus pie chart](https://huggingface.co/globuslabs/ScholarBERT/resolve/main/corpus_pie_chart.png) # BibTeX entry and citation info If using this model, please cite this paper: ``` @misc{hong2023diminishing, title={The Diminishing Returns of Masked Language Models to Science}, author={Zhi Hong and Aswathy Ajith and Gregory Pauloski and Eamon Duede and Kyle Chard and Ian Foster}, year={2023}, eprint={2205.11342}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
globuslabs/ScholarBERT_100_WB
globuslabs
2023-06-28T20:00:01Z
115
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "science", "multi-displinary", "en", "arxiv:2205.11342", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-22T22:27:22Z
--- language: en tags: - science - multi-displinary license: apache-2.0 --- # ScholarBERT_100_WB Model This is the **ScholarBERT_100_WB** variant of the ScholarBERT model family. The model is pretrained on a large collection of scientific research articles (**221B tokens**). Additionally, the pretraining data includes the Wikipedia+BookCorpus, which were used to pretrain the [BERT-base](https://huggingface.co/bert-base-cased) and [BERT-large](https://huggingface.co/bert-large-cased) models. This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default. The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters. # Model Architecture | Hyperparameter | Value | |-----------------|:-------:| | Layers | 24 | | Hidden Size | 1024 | | Attention Heads | 16 | | Total Parameters | 340M | # Training Dataset The vocab and the model are pretrained on **100% of the PRD** scientific literature dataset and the Wikipedia+BookCorpus. The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”), a nonprofit organization based in California. This dataset was constructed from a corpus of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals. The articles span Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences, Social Sciences, and Technology. The distribution of articles is shown below. ![corpus pie chart](https://huggingface.co/globuslabs/ScholarBERT/resolve/main/corpus_pie_chart.png) # BibTeX entry and citation info If using this model, please cite this paper: ``` @misc{hong2023diminishing, title={The Diminishing Returns of Masked Language Models to Science}, author={Zhi Hong and Aswathy Ajith and Gregory Pauloski and Eamon Duede and Kyle Chard and Ian Foster}, year={2023}, eprint={2205.11342}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
globuslabs/ScholarBERT
globuslabs
2023-06-28T19:59:26Z
115
9
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "science", "multi-displinary", "en", "arxiv:2205.11342", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-22T22:15:16Z
--- language: en tags: - science - multi-displinary license: apache-2.0 --- # ScholarBERT_100 Model This is the **ScholarBERT_100** variant of the ScholarBERT model family. The model is pretrained on a large collection of scientific research articles (**221B tokens**). This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default. The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters. # Model Architecture | Hyperparameter | Value | |-----------------|:-------:| | Layers | 24 | | Hidden Size | 1024 | | Attention Heads | 16 | | Total Parameters | 340M | # Training Dataset The vocab and the model are pretrained on **100% of the PRD** scientific literature dataset. The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”), a nonprofit organization based in California. This dataset was constructed from a corpus of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals. The articles span Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences, Social Sciences, and Technology. The distribution of articles is shown below. ![corpus pie chart](corpus_pie_chart.png) # BibTeX entry and citation info If using this model, please cite this paper: ``` @misc{hong2023diminishing, title={The Diminishing Returns of Masked Language Models to Science}, author={Zhi Hong and Aswathy Ajith and Gregory Pauloski and Eamon Duede and Kyle Chard and Ian Foster}, year={2023}, eprint={2205.11342}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
YakovElm/MariaDB_5_BERT_Over_Sampling
YakovElm
2023-06-28T19:55:08Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T19:54:31Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MariaDB_5_BERT_Over_Sampling results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MariaDB_5_BERT_Over_Sampling This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0696 - Train Accuracy: 0.9771 - Validation Loss: 0.4323 - Validation Accuracy: 0.9221 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4972 | 0.7607 | 0.2633 | 0.9121 | 0 | | 0.1913 | 0.9294 | 0.3861 | 0.9121 | 1 | | 0.0696 | 0.9771 | 0.4323 | 0.9221 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
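The auto-generated card stops at training details; a minimal TensorFlow inference sketch follows, assuming the repository bundles its tokenizer (otherwise fall back to `bert-base-uncased`, the base model). Note the card does not document what the output classes mean.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForSequenceClassification

model_id = "YakovElm/MariaDB_5_BERT_Over_Sampling"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumption: tokenizer is bundled
model = TFBertForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example issue text to classify", return_tensors="tf")
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1)
print(probs.numpy())  # class semantics are not documented on the card
```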
alitair/LunarLander-v2
alitair
2023-06-28T19:53:51Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T19:53:34Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 288.83 +/- 20.38 name: mean_reward verified: false --- # **ppo** Agent playing **LunarLander-v2** This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ahishamm/vit-large-isic-patch-16
ahishamm
2023-06-28T19:52:45Z
191
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-06-28T19:47:01Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy - recall - f1 - precision model-index: - name: vit-large-isic-patch-16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-isic-patch-16 This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/isic_db dataset. It achieves the following results on the evaluation set: - Loss: 0.7317 - Accuracy: 0.75 - Recall: 0.75 - F1: 0.75 - Precision: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi
shirsh10mall
2023-06-28T19:52:16Z
63
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-27T03:08:56Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.5746 - Validation Loss: 4.4287 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 10, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.5743 | 4.4287 | 0 | | 4.5751 | 4.4287 | 1 | | 4.5730 | 4.4287 | 2 | | 4.5752 | 4.4287 | 3 | | 4.5757 | 4.4287 | 4 | | 4.5753 | 4.4287 | 5 | | 4.5729 | 4.4287 | 6 | | 4.5759 | 4.4287 | 7 | | 4.5749 | 4.4287 | 8 | | 4.5746 | 4.4287 | 9 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.1.0 - Tokenizers 0.13.3
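A minimal inference sketch for this TensorFlow checkpoint, assuming the standard Marian generate API and that the tokenizer is bundled with the repository; the input sentence is illustrative.

```python
from transformers import AutoTokenizer, TFMarianMTModel

model_id = "shirsh10mall/Helsinki-shirsh-finetuned-translation-english-to-hindi"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumption: tokenizer is bundled
model = TFMarianMTModel.from_pretrained(model_id)

batch = tokenizer(["The weather is nice today."], return_tensors="tf")
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```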
Jumartineze/xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos
Jumartineze
2023-06-28T19:45:04Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T17:15:13Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9935 - F1: 0.5903 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.005 | 1.0 | 766 | 0.9719 | 0.5623 | | 0.8756 | 2.0 | 1532 | 0.9842 | 0.5655 | | 0.753 | 3.0 | 2298 | 0.9935 | 0.5903 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
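No inference example is given above; a minimal sketch with the text-classification pipeline follows. The card does not document label names, so expect generic `LABEL_k` outputs unless an `id2label` mapping was saved; the Spanish review is illustrative (MeIA targets sentiment analysis of Spanish-language reviews).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Jumartineze/xlm-roberta-base-finetuned-MeIA-AnalisisDeSentimientos",
)

# Illustrative Spanish review; the model predicts one of the sentiment classes
print(classifier("El servicio fue excelente y la comida deliciosa."))
```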
S3S3/ppo-Pyramids_Training1
S3S3
2023-06-28T19:42:01Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-06-28T19:41:53Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: S3S3/ppo-Pyramids_Training1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
amittian/setfit_ds_version_0_0_3
amittian
2023-06-28T19:41:36Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-06-28T19:41:15Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # amittian/setfit_ds_version_0_0_3 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("amittian/setfit_ds_version_0_0_3") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
PickleYard/PerfectWorld
PickleYard
2023-06-28T19:38:43Z
7
0
diffusers
[ "diffusers", "ai-art", "style-transfer", "animation", "deep-learning", "text-to-image", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-28T18:41:57Z
--- license: other language: - en library_name: diffusers pipeline_tag: text-to-image tags: - ai-art - style-transfer - animation - deep-learning - text-to-image ---
bk6000/q-FrozenLake-v1-4x4-noSlippery
bk6000
2023-06-28T19:27:47Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T19:27:45Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="bk6000/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
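The snippet above calls `load_from_hub` without defining it; a minimal sketch of that helper, assuming the Deep RL Course convention of pickling the whole model dict (Q-table, env id, hyperparameters):

```python
import pickle
import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-Learning model dict from the Hugging Face Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="bk6000/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # matches this no-slippery variant
```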
YakovElm/Jira_20_BERT_Over_Sampling
YakovElm
2023-06-28T19:05:58Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T19:05:01Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira_20_BERT_Over_Sampling results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira_20_BERT_Over_Sampling This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0521 - Train Accuracy: 0.9856 - Validation Loss: 0.4925 - Validation Accuracy: 0.8612 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4491 | 0.7989 | 0.6370 | 0.6656 | 0 | | 0.1533 | 0.9514 | 0.3511 | 0.9211 | 1 | | 0.0521 | 0.9856 | 0.4925 | 0.8612 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
S3S3/ppo-SnowballTarget
S3S3
2023-06-28T19:02:06Z
9
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-06-28T18:39:43Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: S3S3/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
cleanrl/Hopper-v4-ddpg_continuous_action-seed1
cleanrl
2023-06-28T18:59:37Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Hopper-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T18:59:25Z
--- tags: - Hopper-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Hopper-v4 type: Hopper-v4 metrics: - type: mean_reward value: 1675.53 +/- 1038.51 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **Hopper-v4** This is a trained model of a DDPG agent playing Hopper-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Hopper-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Hopper-v4-ddpg_continuous_action-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Hopper-v4 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'env_id': 'Hopper-v4', 'exp_name': 'ddpg_continuous_action', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'torch_deterministic': True, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
asapp/sew-tiny-100k-ft-ls100h
asapp
2023-06-28T18:56:29Z
671
0
transformers
[ "transformers", "pytorch", "safetensors", "sew", "automatic-speech-recognition", "audio", "speech", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: en datasets: - librispeech_asr tags: - audio - speech - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: sew-tiny-100k-ft-ls100h results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 10.61 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 23.74 --- # SEW-tiny [SEW by ASAPP Research](https://github.com/asappresearch/sew) The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, SEWForCTC from datasets import load_dataset import soundfile as sf import torch # load the model and preprocessor processor = Wav2Vec2Processor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h") model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h") # load the dummy dataset with speech samples ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # preprocess input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **asapp/sew-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data.
```python from datasets import load_dataset from transformers import SEWForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h") def map_to_pred(batch): input_values = processor(batch["audio"][0]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | | --- | --- | | 10.61 | 23.74 |
cleanrl/Humanoid-v4-ddpg_continuous_action_jax-seed1
cleanrl
2023-06-28T18:54:13Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Humanoid-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T18:53:59Z
--- tags: - Humanoid-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Humanoid-v4 type: Humanoid-v4 metrics: - type: mean_reward value: 1303.07 +/- 456.80 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **Humanoid-v4** This is a trained model of a DDPG agent playing Humanoid-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action_jax]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Humanoid-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Humanoid-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py curl -OL https://huggingface.co/cleanrl/Humanoid-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Humanoid-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Humanoid-v4 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'env_id': 'Humanoid-v4', 'exp_name': 'ddpg_continuous_action_jax', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
codervent981/admin-dashboard-template
codervent981
2023-06-28T18:39:33Z
0
0
null
[ "region:us" ]
null
2023-06-28T18:39:01Z
Codervent Admin Dashboard Template is a versatile and user-friendly web application interface designed specifically for administrators. With its clean and modern design, it offers a comprehensive set of features and tools to manage and monitor various aspects of an application or website. The template provides a responsive layout, making it accessible on different devices. It includes various widgets, charts, tables, and forms, allowing administrators to analyze data, manage user accounts, track performance metrics, and carry out administrative tasks efficiently. Codervent Admin Dashboard Template is a reliable solution for streamlining administrative workflows and enhancing productivity.

Read more: https://codervent.com/
vlkn/falcon_finetuned2
vlkn
2023-06-28T18:38:23Z
0
0
null
[ "tensorboard", "generated_from_trainer", "region:us" ]
null
2023-06-28T17:27:34Z
--- tags: - generated_from_trainer model-index: - name: falcon_finetuned2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon_finetuned2 This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 300 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
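For readers who want to replicate the setup, the hyperparameter list above maps onto a `transformers` `TrainingArguments` object roughly as follows. This is a sketch, not the actual training script: `output_dir` is a placeholder, and any PEFT/quantization wiring used for the sharded Falcon base is omitted.

```python
from transformers import TrainingArguments

# Sketch reconstructing the reported configuration; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="falcon_finetuned2",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # total train batch size: 16
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=300,
    seed=42,
)
```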
PickleYard/Elysian-Fields
PickleYard
2023-06-28T18:22:47Z
13
0
diffusers
[ "diffusers", "artificial-intelligence", "ai-art", "anime-style", "content-creation", "animation", "dreamshaper", "text-to-image", "en", "dataset:unknown", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-28T03:54:18Z
---
language: en
tags:
- artificial-intelligence
- ai-art
- anime-style
- content-creation
- animation
- dreamshaper
- text-to-image
license: other
datasets:
- unknown
library_name: diffusers
---

## Model Description

"Elysian Fields" is a model based on the original Dreamshaper model. The primary objective of this model is to generate artificial intelligence (AI)-driven art and animations that mimic the style of traditional paintings. It excels in creating lifelike portraits, intricate backgrounds, and anime-style characters.

Originally designed for creating unique portraits that transcend the boundaries of computer graphics and heavily-filtered photographs, this model has evolved to become an integral part of content creation and independent animation production. Leveraging the power of LoRA networks, it also supports the generation of anime-style images.

This model is hosted on the Hugging Face Model Hub, and can be used for a wide range of creative applications, from content creation to animation and beyond.
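A minimal text-to-image sketch with diffusers follows; it assumes the repo ships a standard `StableDiffusionPipeline` (as the repo tags indicate), and the prompt is just an illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: load the pipeline from the Hub (the repo tags indicate a StableDiffusionPipeline)
pipe = StableDiffusionPipeline.from_pretrained("PickleYard/Elysian-Fields", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Example prompt; painterly subjects play to the model's stated strengths
image = pipe("portrait of a woman in a sunlit meadow, oil painting style").images[0]
image.save("elysian_fields_sample.png")
```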
dalonsoherrera/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
dalonsoherrera
2023-06-28T17:58:45Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T11:25:37Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos

This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0202
- F1: 0.5469

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0674        | 1.0   | 766  | 1.0666          | 0.5077 |
| 0.977         | 2.0   | 1532 | 1.0202          | 0.5469 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
asdc/Bio-RoBERTime
asdc
2023-06-28T17:56:40Z
106
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "LABEL-0 = NONE", "LABEL-1 = B-DATE", "LABEL-2 = I-DATE", "LABEL-3 = B-TIME", "LABEL-4 = I-TIME", "LABEL-5 = B-DURATION", "LABEL-6 = I-DURATION", "LABEL-7 = B-SET", "LABEL-8 = I-SET", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-22T14:44:54Z
---
license: apache-2.0
widget:
- text: "Ayer dormí la siesta durante 3 horas"
- text: "Recuerda tu cita con el médico el lunes a las 8 de la tarde"
- text: "Recuerda tomar la medicación cada noche"
- text: "Last day I slept for three hours"
- text: "Remember your doctor's appointment on Monday at 6am"
tags:
- LABEL-0 = NONE
- LABEL-1 = B-DATE
- LABEL-2 = I-DATE
- LABEL-3 = B-TIME
- LABEL-4 = I-TIME
- LABEL-5 = B-DURATION
- LABEL-6 = I-DURATION
- LABEL-7 = B-SET
- LABEL-8 = I-SET
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Bio-RoBERTime
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Bio-RoBERTime

This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the [E3C](https://github.com/hltfbk/E3C-Corpus) and Timebank datasets.

It achieves the following results on the [E3C corpus](https://github.com/hltfbk/E3C-Corpus) test set following the TempEval-3 evaluation metrics:

| E3C        | Strict     | Relaxed    | Type       |
|------------|:----------:|-----------:|-----------:|
| RoBERTime  | **0.7606** | **0.9108** | **0.8357** |
| Heideltime | 0.5945     | 0.7558     | 0.6083     |
| Annotador  | 0.6006     | 0.7347     | 0.5598     |

RoBERTime is a token classification model: it labels each token with one of the 9 possible labels. We follow the BIO labeling scheme, so each class has two possible values: Beginning or Inside. For more details on the implementation and evaluation, refer to the paper: ["RoBERTime: A novel model for the detection of temporal expressions in Spanish"](https://rua.ua.es/dspace/handle/10045/133235)

## Model description

- **Developed by**: Alejandro Sánchez de Castro, Juan Martínez Romo, Lourdes Araujo. This model is the result of the paper "RoBERTime: A novel model for the detection of temporal expressions in Spanish"
- **Cite as**: @article{sanchez2023robertime, title={RoBERTime: A novel model for the detection of temporal expressions in Spanish}, author={Sánchez-de-Castro-Fernández, Alejandro and Araujo Serna, Lourdes and Martínez Romo, Juan}, year={2023}, publisher={Sociedad Española para el Procesamiento del Lenguaje Natural} }

## Intended uses & limitations

This model is intended for detecting the extent of temporal expressions in Spanish. It may also work in other languages thanks to RoBERTa's multilingual capabilities. The model does not normalize the value of the expression; that is considered a separate task.

## Training and evaluation data

This model has been trained on the Spanish Timebank corpus and the E3C corpus.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 24

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
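For a quick test, the model works with the standard `transformers` token-classification pipeline; a minimal sketch using one of the widget examples above:

```python
from transformers import pipeline

# Minimal sketch: temporal expression tagging with Bio-RoBERTime
tagger = pipeline("token-classification", model="asdc/Bio-RoBERTime")

for token in tagger("Recuerda tu cita con el médico el lunes a las 8 de la tarde"):
    print(token["word"], token["entity"])
```

The pipeline returns one prediction per token, using the `LABEL-n` ids documented in the tags above (e.g. `LABEL-1` = B-DATE).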
Albertf/Fertary
Albertf
2023-06-28T17:55:39Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-06-28T17:55:39Z
--- license: bigscience-openrail-m ---
YakovElm/Jira_10_BERT_Over_Sampling
YakovElm
2023-06-28T17:50:33Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T17:49:47Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira_10_BERT_Over_Sampling results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira_10_BERT_Over_Sampling This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0915 - Train Accuracy: 0.9711 - Validation Loss: 1.1919 - Validation Accuracy: 0.6909 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5305 | 0.7446 | 0.6068 | 0.6751 | 0 | | 0.2652 | 0.8965 | 0.6721 | 0.6972 | 1 | | 0.0915 | 0.9711 | 1.1919 | 0.6909 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
emiliam/bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
emiliam
2023-06-28T17:48:49Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T14:44:34Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0999
- Accuracy: 0.5436

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9382        | 1.0   | 438  | 1.1474          | 0.5036   |
| 0.8066        | 2.0   | 876  | 1.0999          | 0.5436   |
| 0.6462        | 3.0   | 1314 | 1.2079          | 0.5413   |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
cleanrl/Walker2d-v4-ddpg_continuous_action_jax-seed1
cleanrl
2023-06-28T17:41:03Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Walker2d-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T17:39:39Z
---
tags:
- Walker2d-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Walker2d-v4
      type: Walker2d-v4
    metrics:
    - type: mean_reward
      value: 1468.25 +/- 661.70
      name: mean_reward
      verified: false
---

# (CleanRL) **DDPG** Agent Playing **Walker2d-v4**

This is a trained model of a DDPG agent playing Walker2d-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).

## Get Started

To use this model, please install the `cleanrl` package with the following command:

```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id Walker2d-v4
```

Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more details.

## Command to reproduce the training

```bash
curl -OL https://huggingface.co/cleanrl/Walker2d-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/Walker2d-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Walker2d-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Walker2d-v4 --seed 1
```

# Hyperparameters

```python
{'batch_size': 256,
 'buffer_size': 1000000,
 'capture_video': True,
 'env_id': 'Walker2d-v4',
 'exp_name': 'ddpg_continuous_action_jax',
 'exploration_noise': 0.1,
 'gamma': 0.99,
 'hf_entity': 'cleanrl',
 'learning_rate': 0.0003,
 'learning_starts': 25000.0,
 'noise_clip': 0.5,
 'policy_frequency': 2,
 'save_model': True,
 'seed': 1,
 'tau': 0.005,
 'total_timesteps': 1000000,
 'track': True,
 'upload_model': True,
 'wandb_entity': None,
 'wandb_project_name': 'cleanRL'}
```
srsfdghjkzht/text_to_nude
srsfdghjkzht
2023-06-28T17:31:35Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-06-28T17:31:35Z
--- license: bigscience-openrail-m ---
ndktraining/distilroberta-base-finetuned-wikitext2
ndktraining
2023-06-28T17:29:03Z
122
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-23T04:25:16Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilroberta-base-finetuned-wikitext2

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8349

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0852        | 1.0   | 2406 | 1.9234          |
| 1.992         | 2.0   | 4812 | 1.8828          |
| 1.9603        | 3.0   | 7218 | 1.8223          |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
bofenghuang/whisper-large-v2-cv11-french-ct2
bofenghuang
2023-06-28T17:21:01Z
5
0
ctranslate2
[ "ctranslate2", "automatic-speech-recognition", "whisper-event", "fr", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2023-06-28T16:00:11Z
---
license: apache-2.0
language: fr
thumbnail: null
library_name: ctranslate2
tags:
- automatic-speech-recognition
- whisper-event
---

<style>
img {
  display: inline;
}
</style>

![Model architecture](https://img.shields.io/badge/Model_Architecture-seq2seq-lightgrey)
![Model size](https://img.shields.io/badge/Params-1550M-lightgrey)
![Language](https://img.shields.io/badge/Language-French-lightgrey)

# Fine-tuned French whisper-large-v2 model for CTranslate2

This repository contains the [bofenghuang/whisper-large-v2-cv11-french](https://huggingface.co/bofenghuang/whisper-large-v2-cv11-french) model converted to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) format.

## Usage

```python
from faster_whisper import WhisperModel
from huggingface_hub import snapshot_download

downloaded_model_path = snapshot_download(repo_id="bofenghuang/whisper-large-v2-cv11-french-ct2")

# Run on GPU with FP16
model = WhisperModel(downloaded_model_path, device="cuda", compute_type="float16")
# or run on GPU with INT8
# model = WhisperModel(downloaded_model_path, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(downloaded_model_path, device="cpu", compute_type="int8")

segments, info = model.transcribe("./sample.wav", beam_size=1)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

You can also use the following Google Colab notebook to run inference with the converted Whisper models.

<a href="https://colab.research.google.com/#fileId=https%3A//huggingface.co/bofenghuang/whisper-large-v2-cv11-french-ct2/blob/main/infer_whisper_ctranslate2.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

## Conversion

The original model was converted with the following command:

```bash
ct2-transformers-converter --model bofenghuang/whisper-large-v2-cv11-french --output_dir whisper-large-v2-cv11-french-ct2 --quantization float16
```
cleanrl/HalfCheetah-v4-ddpg_continuous_action_jax-seed1
cleanrl
2023-06-28T17:20:06Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "HalfCheetah-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T17:18:02Z
---
tags:
- HalfCheetah-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DDPG
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: HalfCheetah-v4
      type: HalfCheetah-v4
    metrics:
    - type: mean_reward
      value: 10913.84 +/- 141.94
      name: mean_reward
      verified: false
---

# (CleanRL) **DDPG** Agent Playing **HalfCheetah-v4**

This is a trained model of a DDPG agent playing HalfCheetah-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py).

## Get Started

To use this model, please install the `cleanrl` package with the following command:

```
pip install "cleanrl[ddpg_continuous_action_jax]"
python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id HalfCheetah-v4
```

Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more details.

## Command to reproduce the training

```bash
curl -OL https://huggingface.co/cleanrl/HalfCheetah-v4-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py
curl -OL https://huggingface.co/cleanrl/HalfCheetah-v4-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/HalfCheetah-v4-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id HalfCheetah-v4 --seed 1
```

# Hyperparameters

```python
{'batch_size': 256,
 'buffer_size': 1000000,
 'capture_video': True,
 'env_id': 'HalfCheetah-v4',
 'exp_name': 'ddpg_continuous_action_jax',
 'exploration_noise': 0.1,
 'gamma': 0.99,
 'hf_entity': 'cleanrl',
 'learning_rate': 0.0003,
 'learning_starts': 25000.0,
 'noise_clip': 0.5,
 'policy_frequency': 2,
 'save_model': True,
 'seed': 1,
 'tau': 0.005,
 'total_timesteps': 1000000,
 'track': True,
 'upload_model': True,
 'wandb_entity': None,
 'wandb_project_name': 'cleanRL'}
```
ag159/taxi-v3-q-learning
ag159
2023-06-28T17:20:00Z
0
0
null
[ "FrozenLake-v1", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T17:19:59Z
---
tags:
- FrozenLake-v1
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-q-learning
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1
      type: FrozenLake-v1
    metrics:
    - type: mean_reward
      value: 7.94 +/- 2.60
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="ag159/taxi-v3-q-learning", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
ag159/q-FrozenLake-v1-4x4-noSlippery
ag159
2023-06-28T17:16:22Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T17:16:20Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="ag159/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
YakovElm/Jira_5_BERT_Over_Sampling
YakovElm
2023-06-28T17:15:19Z
54
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T17:14:42Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jira_5_BERT_Over_Sampling results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jira_5_BERT_Over_Sampling This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1422 - Train Accuracy: 0.9521 - Validation Loss: 0.9298 - Validation Accuracy: 0.6404 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5895 | 0.6781 | 0.7268 | 0.5268 | 0 | | 0.3571 | 0.8377 | 0.8239 | 0.6309 | 1 | | 0.1422 | 0.9521 | 0.9298 | 0.6404 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
deepghs/anime_ch_horn
deepghs
2023-06-28T17:13:49Z
0
0
null
[ "onnx", "art", "image-classification", "dataset:deepghs/anime_ch_horn", "license:mit", "region:us" ]
image-classification
2023-06-17T02:38:51Z
--- license: mit datasets: - deepghs/anime_ch_horn metrics: - accuracy - f1 pipeline_tag: image-classification tags: - art --- | Name | FLOPS | Params | Accuracy | AUC | Confusion | Labels | |:-------------------:|:-------:|:--------:|:----------:|:------:|:----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------:| | caformer_s36_raw | 22.10G | 37.22M | 88.32% | 0.9788 | [confusion](https://huggingface.co/deepghs/anime_ch_horn/blob/main/caformer_s36_raw/plot_confusion.png) | `cow`, `demon`, `dragon`, `oni`, `sheep`, `none` | | caformer_s36_v0 | 22.10G | 37.22M | 86.88% | 0.9789 | [confusion](https://huggingface.co/deepghs/anime_ch_horn/blob/main/caformer_s36_v0/plot_confusion.png) | `cow`, `deer`, `demon`, `dragon`, `oni`, `sheep`, `none` | | mobilenetv3_v0_dist | 0.63G | 4.18M | 81.86% | 0.9657 | [confusion](https://huggingface.co/deepghs/anime_ch_horn/blob/main/mobilenetv3_v0_dist/plot_confusion.png) | `cow`, `deer`, `demon`, `dragon`, `oni`, `sheep`, `none` |
Multi-Domain-Expert-Learning/scorpius_16b
Multi-Domain-Expert-Learning
2023-06-28T17:03:58Z
9
1
transformers
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "license:bigscience-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-15T06:04:31Z
---
license: bigscience-openrail-m
---

This model is a merge of 80% starchatplus_beta and 20% wizardcoder. It is intended as a research tool for studying the merging and routing of experts.

`"multiple-py": { "pass@1": 0.36645962732919257 }`

*Note: the results below use a 0.1 sample of the eval, for test purposes only.*

hf-causal (pretrained=Multi-Domain-Expert-Layers/scorpius_16b,dtype=bfloat16), limit: 0.1, provide_description: False, num_fewshot: 0, batch_size: None

| Task          |Version| Metric  | Value |   |Stderr|
|---------------|------:|---------|------:|---|-----:|
|arc_challenge  |      0|acc      | 0.4103|±  |0.0457|
|               |       |acc_norm | 0.4103|±  |0.0457|
|arc_easy       |      0|acc      | 0.7350|±  |0.0410|
|               |       |acc_norm | 0.6923|±  |0.0429|
|hellaswag      |      0|acc      | 0.5812|±  |0.0458|
|               |       |acc_norm | 0.7778|±  |0.0386|
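For context, a weighted merge of two same-architecture checkpoints can be sketched as follows. This is an illustrative sketch only; the two repo ids are assumptions standing in for the starchatplus_beta and wizardcoder checkpoints actually used, and the real merge may have differed in details:

```python
import torch
from transformers import AutoModelForCausalLM

# Assumed stand-ins for the two source checkpoints (not confirmed by this card)
base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/starchat-beta", torch_dtype=torch.bfloat16)
expert = AutoModelForCausalLM.from_pretrained("WizardLM/WizardCoder-15B-V1.0", torch_dtype=torch.bfloat16)

# Linear interpolation of parameters: 80% starchat, 20% wizardcoder
expert_state = expert.state_dict()
merged_state = {
    name: 0.8 * param + 0.2 * expert_state[name]
    for name, param in base.state_dict().items()
}

base.load_state_dict(merged_state)
base.save_pretrained("scorpius_16b-merged")
```

Note that holding two 16B-parameter models in memory at once is expensive; merging shard-by-shard is a common workaround.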
paust/pko-t5-large
paust
2023-06-28T17:03:42Z
751
20
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "ko", "arxiv:2105.09680", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-16T11:59:52Z
---
language: ko
license: cc-by-4.0
---

# pko-t5-large

[Source Code](https://github.com/paust-team/pko-t5)

pko-t5 is a [t5 v1.1 model](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md) trained exclusively on Korean data. To tokenize Korean it uses OOV-free BBPE instead of sentencepiece, and training applied unsupervised learning only, using T5's span corruption task on Korean data (Namuwiki, Wikipedia, the Modu Corpus, etc.). When using pko-t5, please fine-tune it on your target task.

## Usage

The model is accessible through the transformers API. When using the tokenizer, please use `T5TokenizerFast` rather than `T5Tokenizer`. The model itself can be used directly with `T5ForConditionalGeneration`.

### Example

```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-large')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-large')

input_ids = tokenizer(["qa question: 당신의 이름은 무엇인가요?"], return_tensors="pt").input_ids
labels = tokenizer(["T5 입니다."], return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)

print(f"loss={outputs.loss} logits={outputs.logits}")
```

## KLUE evaluation (dev)

|     | Model                                                            | ynat (macro F1) | sts (pearsonr/F1) | nli (acc) | ner (entity-level F1) | re (micro F1) | dp (LAS)  | mrc (EM/F1) |
|-----|------------------------------------------------------------------|-----------------|-------------------|-----------|-----------------------|---------------|-----------|-------------|
|     | Baseline                                                         | **87.30**       | **93.20/86.13**   | **89.50** | 86.06                 | 71.06         | 87.93     | **75.26/-** |
| FT  | [pko-t5-small](https://huggingface.co/paust/pko-t5-small) (77M)  | 86.21           | 77.99/77.01       | 69.20     | 82.60                 | 66.46         | 93.15     | 43.81/46.58 |
| FT  | [pko-t5-base](https://huggingface.co/paust/pko-t5-base) (250M)   | 87.29           | 90.25/83.43       | 79.73     | 87.80                 | 67.23         | 97.28     | 61.53/64.74 |
| FT  | [pko-t5-large](https://huggingface.co/paust/pko-t5-large) (800M) | 87.12           | 92.05/85.24       | 84.96     | **88.18**             | **75.17**     | **97.60** | 68.01/71.44 |
| MT  | pko-t5-small                                                     | 84.54           | 68.50/72.02       | 51.16     | 74.69                 | 66.11         | 80.40     | 43.60/46.28 |
| MT  | pko-t5-base                                                      | 86.89           | 83.96/80.30       | 72.03     | 85.27                 | 66.59         | 95.05     | 61.11/63.94 |
| MT  | pko-t5-large                                                     | 87.57           | 91.93/86.29       | 83.63     | 87.41                 | 71.34         | 96.99     | 70.70/73.72 |

- FT: single-task fine-tuning / MT: multi-task fine-tuning
- [Baseline](https://arxiv.org/abs/2105.09680): SOTA scores on the dev set reported in the KLUE paper

## License

pko-t5, built by [PAUST](https://paust.io), is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE).
mr-m1chaeljprodss/catmodel
mr-m1chaeljprodss
2023-06-28T16:57:04Z
0
0
fairseq
[ "fairseq", "music", "en", "es", "dataset:OpenAssistant/oasst1", "license:openrail", "region:us" ]
null
2023-06-28T16:51:23Z
--- license: openrail language: - en - es library_name: fairseq tags: - music datasets: - OpenAssistant/oasst1 metrics: - accuracy ---
J4m35M4xw3ll/ppo_cleanrl-LunarLander-v2
J4m35M4xw3ll
2023-06-28T16:53:26Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T15:49:18Z
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -14.73 +/- 107.14
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters

```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 1000000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'J4m35M4xw3ll/ppo_cleanrl-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
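No reproduce command ships with this card. Given `exp_name: 'ppo'` and the values above, a command along the following lines should approximate the run with CleanRL's `ppo.py`; the script name and flags are inferred from CleanRL's conventions, not taken from this card:

```bash
curl -OL https://raw.githubusercontent.com/vwxyzjn/cleanrl/master/cleanrl/ppo.py
python ppo.py --env-id LunarLander-v2 --total-timesteps 1000000 --learning-rate 0.00025 --num-envs 4 --seed 1
```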
karinthommen/spontaneous-whisper-v6-3
karinthommen
2023-06-28T16:42:23Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-27T15:17:28Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: spontaneous-whisper-v6-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spontaneous-whisper-v6-3 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
YakovElm/IntelDAOS_20_BERT_Over_Sampling
YakovElm
2023-06-28T16:40:18Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T16:39:37Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: IntelDAOS_20_BERT_Over_Sampling results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # IntelDAOS_20_BERT_Over_Sampling This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0438 - Train Accuracy: 0.9891 - Validation Loss: 0.4778 - Validation Accuracy: 0.8979 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4479 | 0.7976 | 0.4396 | 0.8198 | 0 | | 0.0724 | 0.9813 | 0.8134 | 0.8048 | 1 | | 0.0438 | 0.9891 | 0.4778 | 0.8979 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
Dddokter/VAE
Dddokter
2023-06-28T16:38:55Z
0
1
null
[ "region:us" ]
null
2023-06-28T14:55:06Z
This is vae-ft-mse-840000-pruned, but cleaned up a bit more. It works exactly like the original but is 160 MB leaner.
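A loading sketch follows, assuming the repo ships a single safetensors checkpoint and a recent diffusers version with `from_single_file` support; the filename below is a guess, so check the repo's file list first:

```python
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Hypothetical filename; verify against the files actually in this repo
vae = AutoencoderKL.from_single_file(
    "https://huggingface.co/Dddokter/VAE/blob/main/vae-ft-mse-840000-pruned.safetensors"
)

# Swap the slimmed VAE into any SD 1.x pipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", vae=vae)
```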
Priyesh/ppo-LunarLander-v2
Priyesh
2023-06-28T16:38:19Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-18T22:39:51Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-MLP
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 247.08 +/- 24.11
      name: mean_reward
      verified: false
---

# **PPO-MLP** Agent playing **LunarLander-v2**

This is a trained model of a **PPO-MLP** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename below is an assumption; check this repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed, not confirmed by the card
checkpoint = load_from_hub(repo_id="Priyesh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
weslleylima/setfit-ethos-multilabel-example
weslleylima
2023-06-28T16:23:58Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-06-28T15:21:52Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # weslleylima/setfit-ethos-multilabel-example This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("weslleylima/setfit-ethos-multilabel-example") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
vluz/Generalis_V1
vluz
2023-06-28T16:15:54Z
0
0
null
[ "en", "license:cc0-1.0", "region:us" ]
null
2023-06-28T14:30:48Z
---
license: cc0-1.0
language:
- en
---

# Generalis V1

<hr>

### An attempt at merging several v1.5 models into one general-purpose model.

Focus has been put on simple prompts, good one-off generation, muted colours, low memory usage, and small model size.

It is intended as an easy model for use in larger projects where image generation is needed.

Published under CC0

<hr>

Usage example:

```python
import torch  # Tested with 2.0.1+cu118
from diffusers import StableDiffusionPipeline  # <3

# Model location in HF
model = "https://huggingface.co/vluz/Generalis_V1/blob/main/Generalis_v1.safetensors"

# Create pipe
pipe = StableDiffusionPipeline.from_ckpt(model,
                                         torch_dtype=torch.float16,
                                         safety_checker=None,
                                         feature_extractor=None,
                                         requires_safety_checker=False,)

# Cleanup
del pipe.vae.encoder
torch.cuda.empty_cache()

# Send to GPU
pipe = pipe.to("cuda")

# Optimize for low vram use and clear cache again
pipe.enable_vae_tiling()
pipe.enable_attention_slicing("max")
pipe.enable_xformers_memory_efficient_attention(attention_op=None)
pipe.unet.to(memory_format=torch.channels_last)
pipe.enable_sequential_cpu_offload()
torch.cuda.empty_cache()

# Set a prompt
prompt = "a cat"

# Generate image based on prompt
image = pipe(prompt).images[0]

# Save result image to disk
image.save("cat.png")
```
sharpbai/baichuan-vicuna-7b
sharpbai
2023-06-28T16:14:33Z
7
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "zh", "en", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:QingyiSi/Alpaca-CoT", "dataset:mhhmm/leetcode-solutions-python", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-18T04:00:48Z
--- language: - zh - en pipeline_tag: text-generation inference: false datasets: - anon8231489123/ShareGPT_Vicuna_unfiltered - QingyiSi/Alpaca-CoT - mhhmm/leetcode-solutions-python --- # baichuan-vicuna-7b A 405M split weight version of [fireballoon/baichuan-vicuna-7b](https://huggingface.co/fireballoon/baichuan-vicuna-7b)
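No usage snippet is included; a standard transformers loading sketch should work, since split weight chunks are reassembled transparently by `from_pretrained`. The `use_fast=False` choice mirrors common practice for LLaMA-family tokenizers and is an assumption here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The sharded 405M chunks are downloaded and reassembled automatically
tokenizer = AutoTokenizer.from_pretrained("sharpbai/baichuan-vicuna-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sharpbai/baichuan-vicuna-7b", torch_dtype=torch.float16)
```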
sharpbai/open_llama_13b
sharpbai
2023-06-28T16:14:25Z
25
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:togethercomputer/RedPajama-Data-1T", "arxiv:2302.13971", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-22T05:07:23Z
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---

# open_llama_13b

*The weight file is split into chunks with a size of 650MB for convenient and fast parallel downloads*

A 650MB split weight version of [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)

The original model card is down below

-----------------------------------------

# OpenLLaMA: An Open Reproduction of LLaMA

In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.

## Weights Release, License and Usage

We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.

### Loading the Weights with Hugging Face Transformers

Preview checkpoints can be directly loaded from the Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we've observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
model_path = 'openlm-research/open_llama_13b'

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```

For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).

### Evaluating with LM-Eval-Harness

The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:

```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
    pretrained if tokenizer is None else tokenizer,
    revision=revision + ("/" + subfolder if subfolder is not None else ""),
    use_fast=False
)
```

### Loading the Weights with EasyLM

For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md).
Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.

## Dataset and Training

We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.

We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.

## Evaluation

We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model under the same evaluation protocol. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443).

Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).

The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric**        | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | ------------ | ------------ | ------------- |
| anli_r1/acc            | 0.32     | 0.35     | 0.35      | 0.33         | 0.33         | 0.33          |
| anli_r2/acc            | 0.34     | 0.34     | 0.36      | 0.36         | 0.32         | 0.33          |
| anli_r3/acc            | 0.35     | 0.37     | 0.39      | 0.38         | 0.35         | 0.40          |
| arc_challenge/acc      | 0.34     | 0.39     | 0.44      | 0.37         | 0.34         | 0.41          |
| arc_challenge/acc_norm | 0.37     | 0.41     | 0.44      | 0.38         | 0.37         | 0.44          |
| arc_easy/acc           | 0.67     | 0.68     | 0.75      | 0.72         | 0.69         | 0.75          |
| arc_easy/acc_norm      | 0.62     | 0.52     | 0.59      | 0.68         | 0.65         | 0.70          |
| boolq/acc              | 0.66     | 0.75     | 0.71      | 0.71         | 0.68         | 0.75          |
| hellaswag/acc          | 0.50     | 0.56     | 0.59      | 0.53         | 0.49         | 0.56          |
| hellaswag/acc_norm     | 0.66     | 0.73     | 0.76      | 0.72         | 0.67         | 0.76          |
| openbookqa/acc         | 0.29     | 0.29     | 0.31      | 0.30         | 0.27         | 0.31          |
| openbookqa/acc_norm    | 0.38     | 0.41     | 0.42      | 0.40         | 0.40         | 0.43          |
| piqa/acc               | 0.75     | 0.78     | 0.79      | 0.76         | 0.75         | 0.77          |
| piqa/acc_norm          | 0.76     | 0.78     | 0.79      | 0.77         | 0.76         | 0.79          |
| record/em              | 0.88     | 0.91     | 0.92      | 0.89         | 0.88         | 0.91          |
| record/f1              | 0.89     | 0.91     | 0.92      | 0.90         | 0.89         | 0.91          |
| rte/acc                | 0.54     | 0.56     | 0.69      | 0.60         | 0.58         | 0.64          |
| truthfulqa_mc/mc1      | 0.20     | 0.21     | 0.25      | 0.23         | 0.22         | 0.25          |
| truthfulqa_mc/mc2      | 0.36     | 0.34     | 0.40      | 0.35         | 0.35         | 0.38          |
| wic/acc                | 0.50     | 0.50     | 0.50      | 0.51         | 0.48         | 0.47          |
| winogrande/acc         | 0.64     | 0.68     | 0.70      | 0.67         | 0.62         | 0.70          |
| Average                | 0.52     | 0.55     | 0.57      | 0.55         | 0.53         | 0.57          |

We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.

## Contact

We would love to get feedback from the community. If you have any questions, please open an issue or contact us.

OpenLLaMA is developed by: [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research. *Equal Contribution

## Acknowledgment

We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We'd like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We'd also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.

The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We'd like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX: ``` @software{openlm2023openllama, author = {Geng, Xinyang and Liu, Hao}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @software{together2023redpajama, author = {Together Computer}, title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset}, month = April, year = 2023, url = {https://github.com/togethercomputer/RedPajama-Data} } ``` ``` @article{touvron2023llama, title={Llama: Open and efficient foundation language models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
sharpbai/open_llama_7b
sharpbai
2023-06-28T16:14:21Z
9
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-17T16:23:41Z
--- license: apache-2.0 datasets: - togethercomputer/RedPajama-Data-1T --- # OpenLLaMA: An Open Reproduction of LLaMA A 405M split weight version of [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
sharpbai/chinese-alpaca-plus-lora-7b-merged
sharpbai
2023-06-28T16:14:18Z
15
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "zh", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-13T09:02:38Z
--- license: other language: - zh --- # Chinese-Alpaca-Plus-LoRA-7B This model is merged from [chinese-alpaca-plus-lora-7b](https://huggingface.co/ziqingyang/chinese-alpaca-plus-lora-7b) The weight file is split into chunks with a size of 405M for convenient and fast parallel downloads
sharpbai/alpaca-7b-merged
sharpbai
2023-06-28T16:14:08Z
76
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "alpaca", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-14T15:56:13Z
--- license: other tags: - alpaca --- ### Stanford Alpaca-7B-Merged *The weight file is split into chunks with a size of 405M for convenient and fast parallel downloads* This repo hosts the merged weight for [Stanford Alpaca-7B](https://github.com/tatsu-lab/stanford_alpaca/) that can be used directly. Below is the original model card information. ----------------------- ### Stanford Alpaca-7B This repo hosts the weight diff for [Stanford Alpaca-7B](https://github.com/tatsu-lab/stanford_alpaca/) that can be used to reconstruct the original model weights when applied to Meta's LLaMA weights. To recover the original Alpaca-7B weights, follow these steps: ```text 1. Convert Meta's released weights into huggingface format. Follow this guide: https://huggingface.co/docs/transformers/main/model_doc/llama 2. Make sure you cloned the released weight diff into your local machine. The weight diff is located at: https://huggingface.co/tatsu-lab/alpaca-7b/tree/main 3. Run this function with the correct paths. E.g., python weight_diff.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir> --path_tuned <path_to_store_recovered_weights> ``` Once step 3 completes, you should have a directory with the recovered weights, from which you can load the model like the following ```python import transformers alpaca_model = transformers.AutoModelForCausalLM.from_pretrained("<path_to_store_recovered_weights>") alpaca_tokenizer = transformers.AutoTokenizer.from_pretrained("<path_to_store_recovered_weights>") ```
sharpbai/vicuna-13b-v1.3
sharpbai
2023-06-28T16:14:05Z
12
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.05685", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-20T09:24:02Z
--- inference: false --- # vicuna-13b-v1.3 *The weight file is split into chunks with a size of 650MB for convenient and fast parallel downloads* A 650M split weight version of [lmsys/vicuna-13b-v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) The original model card is down below ----------------------------------------- # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 140K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
sharpbai/vicuna-7b-v1.3
sharpbai
2023-06-28T16:14:00Z
221
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.05685", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-20T09:01:29Z
---
inference: false
---

# vicuna-7b-v1.3

*The weight file is split into chunks with a size of 405M for convenient and fast parallel downloads*

A 405M-split weight version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3). The original model card follows below.

# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.

## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 140K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
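As an alternative to hand-rolled prompting, FastChat's CLI (linked in the card above) can serve the model directly and apply the Vicuna conversation template itself. A minimal invocation, assuming the split weights load the same way as the upstream repo:

```bash
pip install fschat
python3 -m fastchat.serve.cli --model-path sharpbai/vicuna-7b-v1.3
```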
minhcrafters/DialoGPT-small-Fukuya
minhcrafters
2023-06-28T15:59:29Z
120
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "en", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-28T15:20:56Z
---
license: gpl-3.0
tags:
- conversational
language:
- en
---
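The card ships without usage notes; DialoGPT-derived models are conventionally driven with the standard DialoGPT chat pattern. A minimal sketch (an assumption on our part, not from the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minhcrafters/DialoGPT-small-Fukuya"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# encode a user turn and append the end-of-sequence token, as DialoGPT expects
input_ids = tokenizer.encode("Hello! How are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)

# decode only the model's reply (everything after the user turn)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```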
YakovElm/IntelDAOS_15_BERT_Over_Sampling
YakovElm
2023-06-28T15:55:27Z
62
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T15:54:45Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_15_BERT_Over_Sampling
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS_15_BERT_Over_Sampling

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0704
- Train Accuracy: 0.9820
- Validation Loss: 0.7296
- Validation Accuracy: 0.8108
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5199     | 0.7373         | 0.5990          | 0.6517              | 0     |
| 0.2247     | 0.9276         | 0.8030          | 0.7357              | 1     |
| 0.0704     | 0.9820         | 0.7296          | 0.8108              | 2     |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
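The card gives no inference snippet; since the checkpoint is a TensorFlow BERT classifier, loading it likely follows the standard pattern below. This is a sketch with an assumed example input, and the label semantics are not documented in the card.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForSequenceClassification

model_id = "YakovElm/IntelDAOS_15_BERT_Over_Sampling"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFBertForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example issue text to classify", return_tensors="tf", truncation=True)
logits = model(**inputs).logits
predicted_class = int(tf.argmax(logits, axis=-1)[0])  # label meanings are undocumented
print(predicted_class)
```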
czz23/SplitStatement-setfit-model-2epoch
czz23
2023-06-28T15:38:27Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "albert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-06-28T15:38:23Z
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---

# czz23/SplitStatement-setfit-model-2epoch

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("czz23/SplitStatement-setfit-model-2epoch")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
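For reference, the two-step training recipe described at the top of this card typically looks like the sketch below in the classic SetFit API. This is illustrative only, not the card's actual training script: the base Sentence Transformer, the tiny dataset, and the hyperparameters are all assumptions (the repo's tags suggest an ALBERT-based encoder).

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# a tiny, hypothetical few-shot dataset with "text" and "label" columns
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-albert-small-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the encoder
    num_iterations=20,                # number of contrastive pairs generated per example
    num_epochs=2,                     # step 2's head is fit after the encoder is tuned
)
trainer.train()  # the trained model can then be pushed to the Hub or used for inference
```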
kvarnalidis/q-FrozenLake-v1-4x4-noSlippery
kvarnalidis
2023-06-28T15:29:54Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T15:29:51Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="kvarnalidis/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
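The snippet above assumes a `load_from_hub` helper and a `gym` import are already defined in the notebook. A minimal sketch of that helper, in the form commonly used in the Hugging Face Deep RL course materials (an assumption here, not part of this card):

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dict (Q-table, env_id, etc.) from the Hub and load it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```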
h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b
h2oai
2023-06-28T15:28:51Z
351
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-28T13:55:52Z
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
  https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---

# Model Card

## Summary

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.

```bash
pip install transformers==4.30.2
pip install accelerate==0.20.3
pip install torch==2.0.0
```

```python
import torch
from transformers import pipeline

generate_text = pipeline(
    model="h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b",
    torch_dtype="auto",
    trust_remote_code=True,
    use_fast=False,
    device_map={"": "cuda:0"},
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:

```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```

```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```

Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.

```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b",
    use_fast=False,
    padding_side="left",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b",
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-3b"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    use_fast=False,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
model.cuda().eval()

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# generate configuration can be modified to your needs
tokens = model.generate(
    **inputs,
    min_new_tokens=2,
    max_new_tokens=1024,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)[0]

tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```

## Model Architecture

```
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 3200, padding_idx=0)
    (layers): ModuleList(
      (0-25): 26 x LlamaDecoderLayer(
        (self_attn): LlamaAttention(
          (q_proj): Linear(in_features=3200, out_features=3200, bias=False)
          (k_proj): Linear(in_features=3200, out_features=3200, bias=False)
          (v_proj): Linear(in_features=3200, out_features=3200, bias=False)
          (o_proj): Linear(in_features=3200, out_features=3200, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=3200, out_features=8640, bias=False)
          (down_proj): Linear(in_features=8640, out_features=3200, bias=False)
          (up_proj): Linear(in_features=3200, out_features=8640, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=3200, out_features=32000, bias=False)
)
```

## Model Configuration

This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.

## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
LordSomen/dqn-SpaceInvadersNoFrameskip-v4_1696
LordSomen
2023-06-28T15:21:25Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-28T15:20:48Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 687.50 +/- 283.55
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LordSomen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LordSomen -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga LordSomen
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
filipps/model
filipps
2023-06-28T15:12:28Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-27T16:20:11Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - filipps/model

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
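A minimal inference sketch (not part of the original card; the prompt and scheduler settings are illustrative assumptions built around the instance prompt "a photo of sks dog"):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("filipps/model", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# the instance token "sks" steers generation toward the fine-tuned subject
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```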
YakovElm/IntelDAOS_10_BERT_Over_Sampling
YakovElm
2023-06-28T15:11:31Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-28T15:10:51Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_10_BERT_Over_Sampling
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# IntelDAOS_10_BERT_Over_Sampling

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0340
- Train Accuracy: 0.9918
- Validation Loss: 1.2417
- Validation Accuracy: 0.6727
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5670     | 0.6984         | 0.5649          | 0.7057              | 0     |
| 0.1840     | 0.9359         | 0.4907          | 0.8258              | 1     |
| 0.0340     | 0.9918         | 1.2417          | 0.6727              | 2     |

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
xzuyn/GPT2-RPGPT-8.48M
xzuyn
2023-06-28T15:06:20Z
255
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "en", "dataset:practicaldreamer/RPGPT_PublicDomain-alpaca", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-27T05:46:00Z
---
language:
- en
pipeline_tag: text-generation
datasets:
- practicaldreamer/RPGPT_PublicDomain-alpaca
---

# Latest Version: *111,577* / *111,577* Steps (Epoch 1).
- 28,563,712 / 28,563,712 tokens seen (Epoch 1).
- 0 / 28,563,712 tokens seen (Epoch 2).
- 0 / 28,563,712 tokens seen (Epoch 3).

# Model Info:
- Trained from scratch.
- 8.48M parameters.
- 256 context length.
- Test model. Likely needs at least 512 context to function "properly".
- Trained with a dataset that overlaps by a quarter of the context length (shifts by 64 tokens for each subset).

# Format:
```
<|characters|>
Nancy (Oliver Twist): Female, early 20s, ESFP, Cockney accent. Loyal...
Mr. Edward Hyde (Dr. Jekyll and Mr. Hyde): Male, late 30s, ESTP...
<|scenario|>
In an alternate Victorian London where the city's poor and downtrodden...
<|response|>
Nancy: *gently brushes her fingers across the worn book spine, before suddenly stopping as she feels another hand...
Mr. Edward Hyde: *glances at Nancy with a sinister grin, slowly pulling his hand back* No need to apologize, miss...
```

# Example Output:
Step 111,577. Input `<|characters|>` as a prompt, set max tokens to 256, amount to generate to 253. This generated up to `just our circumstances before us`. Then I set amount to generate to 128 to keep half of the text in context. This generated up to `A wise suggestion,`. I then lowered the amount to generate to 64. That generated up to the ending `know of our current situation?`.

```
<|characters|>
Mrs. Samsa (The Metamorphosis): Female, middle-aged, ISFJ, German accent, compassionate mother struggling to cope with her son's transformation, and eventually succumbs to the family's financial and emotional burdens.
<|scenario|>
In a twist of fate, Mrs. Samsa finds herself transported back in time to time and space. Evangelist, who is on an isolated haven where he encounters Mrs. Samsa, by a different tale. Mrs. Samsa, still burdened by the weight of his past actions, must confront the difficult path ahead. Through their conversations, they find common ground in their own worlds, allowing them to continue seeking wisdom from each other and finding solace in one another's words. The dialogue between these two characters will offer insight into each other's worlds as well as how their experiences have shaped them in this whimsical world.
<|response|>
Mrs. Samsa: *approaches the peculiar sights around her, eyes widening in surprise* Oh dear, I couldn't help but notice you not! I've never seen my fair life, but I'm starting to see my son. Are you here in this peculiar place?
Evangelist: *smiles warmly at Mrs. Samsa* Yes, we are indeed more than just our circumstances before us. And it is your place of wisdom and understanding. *opens the book, his eyes sparkling with excitement*
Mrs. Samsa: *slowly opens a small book of the book* I must confess, Evangelist, I've never had a different view of this place. But it feels like this before our worlds find such things that we've discovered.
Evangelist: *nods thoughtfully* You possess great wisdom, Mrs. Samsa. It seems we are both searching for a way to escape this peculiar library. Perhaps that is a sign of my spiritual journey towards you.
Mrs. Samsa: *eyes widen in curiosity* A wise suggestion, Candide. I can't help but feel a sense of serenity amidst my own life.
Evangelist: *smiles warmly* Of course, Mrs. Samsa. The path to enlightenment is filled with joy and understanding. Now, tell me more about this ancient book. What do you need to know of our current situation?
```

# Config:
The learning rate may have been too high (not sure). The average loss at step 111,577 was 2.1.

```
batch_size: 1
dropout: 0
learning_rate: 0.0001
max_length: 256
n_embed: 256
n_head: 8
n_layer: 8
vocab_size: 8192
```
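To sample from the model with `transformers` in the way the card describes (starting generation from the bare `<|characters|>` tag), something like the sketch below should work; the sampling settings here are illustrative assumptions, not the card's exact settings.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="xzuyn/GPT2-RPGPT-8.48M")

# the card's examples start generation from the bare <|characters|> tag
out = generator(
    "<|characters|>",
    max_new_tokens=253,   # leaves room within the 256-token context
    do_sample=True,
    temperature=0.8,      # assumed value
)
print(out[0]["generated_text"])
```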
amm297/my_awesome_peft_model
amm297
2023-06-28T14:48:41Z
24
0
peft
[ "peft", "RefinedWebModel", "generated_from_trainer", "text-generation", "custom_code", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2023-06-28T10:55:56Z
---
license: other
library_name: peft
pipeline_tag: text-generation
tags:
- generated_from_trainer
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0.dev0
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
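The card does not state the base model (the tags point to a RefinedWeb-style architecture), but PEFT adapters record it in their config, so loading typically follows the sketch below. This is an assumption on our part, not the author's documented usage.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "amm297/my_awesome_peft_model"

# the adapter config stores which base model it was trained on
config = PeftConfig.from_pretrained(adapter_id)

base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    trust_remote_code=True,  # RefinedWeb-style models ship custom modeling code
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# wrap the base model with the trained PEFT adapter
model = PeftModel.from_pretrained(base_model, adapter_id)
```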
heiheiknight/ddpm-ema-pokemon-64
heiheiknight
2023-06-28T14:45:33Z
1
0
diffusers
[ "diffusers", "tensorboard", "en", "license:afl-3.0", "diffusers:DDPMPipeline", "region:us" ]
null
2023-06-28T10:18:26Z
---
license: afl-3.0
language:
- en
---
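The card has no usage section; since the repo is tagged with the `DDPMPipeline` class, unconditional sampling presumably works as in the sketch below (an assumption, not from the original card).

```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("heiheiknight/ddpm-ema-pokemon-64")

# unconditional DDPM sampling; 1000 steps is the scheduler's default
image = pipe(num_inference_steps=1000).images[0]
image.save("pokemon_sample.png")
```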