| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-01 06:28:43 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (546 classes) |  |  |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) |  |  |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-01 06:27:36 |
| card | string (length) | 11 | 1.01M |
aroot/mbart-finetuned-eng-guj
aroot
2023-06-30T14:30:51Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-06-22T00:44:05Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: mbart-finetuned-eng-guj results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-guj This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 3.5996 - Bleu: 1.8882 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
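No usage example accompanies these auto-generated translation cards; the sketch below shows one plausible way to run English-to-Gujarati inference, assuming the fine-tune keeps the MBart-50 tokenizer and language codes (`en_XX`, `gu_IN`) of its parent checkpoint, which the card itself does not confirm.

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Assumption: the fine-tune inherits the MBart-50 tokenizer and its
# language codes (en_XX for English, gu_IN for Gujarati).
model_id = "aroot/mbart-finetuned-eng-guj"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"
batch = tokenizer("The weather is nice today.", return_tensors="pt")

# Force the decoder to start with the Gujarati language token.
generated = model.generate(
    **batch,
    forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"],
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

The same pattern would apply to the other mbart fine-tunes in this listing (eng-mya with `my_MM`, eng-fra with `fr_XX`).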
team-lucid/hubert-large-korean
team-lucid
2023-06-30T14:27:34Z
474
10
transformers
[ "transformers", "pytorch", "jax", "safetensors", "hubert", "feature-extraction", "speech", "audio", "automatic-speech-recognition", "custom_code", "ko", "arxiv:2106.07447", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-04T07:13:38Z
--- license: apache-2.0 language: - ko library_name: transformers pipeline_tag: automatic-speech-recognition tags: - speech - audio --- # hubert-large-korean ## Model Details HuBERT (Hidden-Unit BERT) is a speech representation learning model proposed by Facebook. Unlike conventional speech recognition models, HuBERT uses a self-supervised learning approach that learns directly from the raw waveform. This model was trained on Cloud TPUs provided through Google's TPU Research Cloud (TRC). ### Model Description <table> <tr> <td colspan="2"></td> <td>Base</td> <td>Large</td> </tr> <tr> <td rowspan="3">CNN Encoder</td> <td>strides</td> <td colspan="2">5, 2, 2, 2, 2, 2, 2</td> </tr> <tr> <td>kernel width</td> <td colspan="2">10, 3, 3, 3, 3, 2, 2</td> </tr> <tr> <td>channel</td> <td colspan="2">512</td> </tr> <tr> <td rowspan="4">Transformer Encoder</td> <td>Layer</td> <td>12</td> <td>24</td> </tr> <tr> <td>embedding dim</td> <td>768</td> <td>1024</td> </tr> <tr> <td>inner FFN dim</td> <td>3072</td> <td>4096</td> </tr> <tr> <td>attention heads</td> <td>8</td> <td>16</td> </tr> <tr> <td>Projection</td> <td>dim</td> <td>256</td> <td>768</td> </tr> <tr> <td colspan="2">Params</td> <td>95M</td> <td>317M </td> </tr> </table> ## How to Get Started with the Model ### PyTorch ```py import torch from transformers import HubertModel model = HubertModel.from_pretrained("team-lucid/hubert-large-korean") wav = torch.ones(1, 16000) outputs = model(wav) print(f"Input: {wav.shape}") # [1, 16000] print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 1024] (hidden dim 1024 for the Large model, per the table above) ``` ### JAX/Flax ```py import jax.numpy as jnp from transformers import FlaxAutoModel model = FlaxAutoModel.from_pretrained("team-lucid/hubert-large-korean", trust_remote_code=True) wav = jnp.ones((1, 16000)) outputs = model(wav) print(f"Input: {wav.shape}") # [1, 16000] print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 1024] ``` ## Training Details ### Training Data The model was trained on roughly 4,000 hours of speech extracted from [자유대화 음성(일반남여)](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=109), [다화자 음성합성 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=542), and [방송 콘텐츠 대화체 음성인식 데이터](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=463), datasets built with funding from the Ministry of Science and ICT and support from the National Information Society Agency of Korea. ### Training Procedure As in the [original paper](https://arxiv.org/pdf/2106.07447.pdf), a Base model was first trained on MFCC features; k-means with 500 clusters was then run on its representations, and the Base and Large models were retrained on the resulting cluster assignments. #### Training Hyperparameters | Hyperparameter | Base | Large | |:--------------------|---------|--------:| | Warmup Steps | 32,000 | 32,000 | | Learning Rates | 5e-4 | 1.5e-3 | | Batch Size | 128 | 128 | | Weight Decay | 0.01 | 0.01 | | Max Steps | 400,000 | 400,000 | | Learning Rate Decay | 0.1 | 0.1 | | Adam \\(\beta_1\\) | 0.9 | 0.9 | | Adam \\(\beta_2\\) | 0.99 | 0.99 |
jiekeshi/CodeBERT-Adversarial-Finetuned-Vulnerability-Prediction
jiekeshi
2023-06-30T14:23:23Z
0
1
null
[ "pytorch", "arxiv:2201.08698", "license:mit", "region:us" ]
null
2023-06-30T14:12:57Z
--- license: mit --- This is the adversarially fine-tuned version of CodeBERT that has been trained for the Vulnerability Prediction task using the [Devign](https://sites.google.com/view/devign) dataset. The adversarial examples used for fine-tuning were generated as described in our ICSE 2022 paper titled ["**Natural Attack for Pre-trained Models of Code**"](https://arxiv.org/abs/2201.08698). If you are interested in using this model, please check our **GitHub repository: https://github.com/soarsmu/attack-pretrain-models-of-code**. If you use the model or any code from our repo in your paper, please kindly cite: ``` @inproceedings{10.1145/3510003.3510146, author = {Yang, Zhou and Shi, Jieke and He, Junda and Lo, David}, title = {Natural Attack for Pre-Trained Models of Code}, year = {2022}, isbn = {9781450392211}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3510003.3510146}, doi = {10.1145/3510003.3510146}, abstract = {Pre-trained models of code have achieved success in many important software engineering tasks. However, these powerful models are vulnerable to adversarial attacks that slightly perturb model inputs to make a victim model produce wrong outputs. Current works mainly attack models of code with examples that preserve operational program semantics but ignore a fundamental requirement for adversarial example generation: perturbations should be natural to human judges, which we refer to as naturalness requirement. In this paper, we propose ALERT (Naturalness Aware Attack), a black-box attack that adversarially transforms inputs to make victim models produce wrong outputs. Different from prior works, this paper considers the natural semantic of generated examples at the same time as preserving the operational semantic of original inputs. Our user study demonstrates that human developers consistently consider that adversarial examples generated by ALERT are more natural than those generated by the state-of-the-art work by Zhang et al. that ignores the naturalness requirement. On attacking CodeBERT, our approach can achieve attack success rates of 53.62\%, 27.79\%, and 35.78\% across three downstream tasks: vulnerability prediction, clone detection and code authorship attribution. On GraphCodeBERT, our approach can achieve average success rates of 76.95\%, 7.96\% and 61.47\% on the three tasks. The above outperforms the baseline by 14.07\% and 18.56\% on the two pre-trained models on average. Finally, we investigated the value of the generated adversarial examples to harden victim models through an adversarial fine-tuning procedure and demonstrated the accuracy of CodeBERT and GraphCodeBERT against ALERT-generated adversarial examples increased by 87.59\% and 92.32\%, respectively.}, booktitle = {Proceedings of the 44th International Conference on Software Engineering}, pages = {1482–1493}, numpages = {12}, keywords = {pre-trained models, adversarial attack, genetic algorithm}, location = {Pittsburgh, Pennsylvania}, series = {ICSE '22} } ```
jiekeshi/GraphCodeBERT-Adversarial-Finetuned-Vulnerability-Prediction
jiekeshi
2023-06-30T14:22:24Z
0
0
null
[ "pytorch", "arxiv:2201.08698", "license:mit", "region:us" ]
null
2023-06-30T14:16:04Z
--- license: mit --- This is the adversarially fine-tuned version of GraphCodeBERT that has been trained for the Vulnerability Prediction task using the [Devign](https://sites.google.com/view/devign) dataset. The adversarial examples used for fine-tuning were generated as described in our ICSE 2022 paper titled ["**Natural Attack for Pre-trained Models of Code**"](https://arxiv.org/abs/2201.08698). If you are interested in using this model, please check our **GitHub repository: https://github.com/soarsmu/attack-pretrain-models-of-code**. If you use the model or any code from our repo in your paper, please kindly cite: ``` @inproceedings{10.1145/3510003.3510146, author = {Yang, Zhou and Shi, Jieke and He, Junda and Lo, David}, title = {Natural Attack for Pre-Trained Models of Code}, year = {2022}, isbn = {9781450392211}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3510003.3510146}, doi = {10.1145/3510003.3510146}, abstract = {Pre-trained models of code have achieved success in many important software engineering tasks. However, these powerful models are vulnerable to adversarial attacks that slightly perturb model inputs to make a victim model produce wrong outputs. Current works mainly attack models of code with examples that preserve operational program semantics but ignore a fundamental requirement for adversarial example generation: perturbations should be natural to human judges, which we refer to as naturalness requirement. In this paper, we propose ALERT (Naturalness Aware Attack), a black-box attack that adversarially transforms inputs to make victim models produce wrong outputs. Different from prior works, this paper considers the natural semantic of generated examples at the same time as preserving the operational semantic of original inputs. Our user study demonstrates that human developers consistently consider that adversarial examples generated by ALERT are more natural than those generated by the state-of-the-art work by Zhang et al. that ignores the naturalness requirement. On attacking CodeBERT, our approach can achieve attack success rates of 53.62\%, 27.79\%, and 35.78\% across three downstream tasks: vulnerability prediction, clone detection and code authorship attribution. On GraphCodeBERT, our approach can achieve average success rates of 76.95\%, 7.96\% and 61.47\% on the three tasks. The above outperforms the baseline by 14.07\% and 18.56\% on the two pre-trained models on average. Finally, we investigated the value of the generated adversarial examples to harden victim models through an adversarial fine-tuning procedure and demonstrated the accuracy of CodeBERT and GraphCodeBERT against ALERT-generated adversarial examples increased by 87.59\% and 92.32\%, respectively.}, booktitle = {Proceedings of the 44th International Conference on Software Engineering}, pages = {1482–1493}, numpages = {12}, keywords = {pre-trained models, adversarial attack, genetic algorithm}, location = {Pittsburgh, Pennsylvania}, series = {ICSE '22} } ```
WALIDALI/rahalistaly
WALIDALI
2023-06-30T14:13:00Z
31
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-30T14:09:18Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### rahalistaly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via the A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Sample pictures of this concept:
aroot/mbart-finetuned-eng-mya
aroot
2023-06-30T14:11:48Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-06-22T00:43:54Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: mbart-finetuned-eng-mya results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-mya This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.1303 - Bleu: 3.2753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
odiaz1066/LunarLander-v2-LunarLander-Show-seed42
odiaz1066
2023-06-30T14:05:23Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-30T14:05:19Z
--- tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 227.09 +/- 33.24 name: mean_reward verified: false --- # (CleanRL) **DQN** Agent Playing **LunarLander-v2** This is a trained model of a DQN agent playing LunarLander-v2. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/LunarLander-Show.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[LunarLander-Show]" python -m cleanrl_utils.enjoy --exp-name LunarLander-Show --env-id LunarLander-v2 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/odiaz1066/LunarLander-v2-LunarLander-Show-seed42/raw/main/dqn.py curl -OL https://huggingface.co/odiaz1066/LunarLander-v2-LunarLander-Show-seed42/raw/main/pyproject.toml curl -OL https://huggingface.co/odiaz1066/LunarLander-v2-LunarLander-Show-seed42/raw/main/poetry.lock poetry install --all-extras python dqn.py --track --save-model --capture-video --exp-name LunarLander-Show --seed 42 --env-id LunarLander-v2 --upload-model --hf-entity odiaz1066 ``` # Hyperparameters ```python {'batch_size': 128, 'buffer_size': 10000, 'capture_video': True, 'cuda': True, 'end_e': 0.05, 'env_id': 'LunarLander-v2', 'exp_name': 'LunarLander-Show', 'exploration_fraction': 0.5, 'gamma': 0.99, 'hf_entity': 'odiaz1066', 'learning_rate': 0.00025, 'learning_starts': 10000, 'num_envs': 1, 'save_model': True, 'seed': 42, 'start_e': 1, 'target_network_frequency': 500, 'tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': True, 'train_frequency': 10, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'lagomorph'} ```
lukaszkolodziejczyk/q-FrozenLake-v1-4x4-noSlippery
lukaszkolodziejczyk
2023-06-30T14:01:39Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-30T14:01:37Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="lukaszkolodziejczyk/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
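The usage snippet above calls `load_from_hub` and `gym` without importing or defining them. A self-contained sketch follows, assuming the helper works as in the Hugging Face Deep RL course: download the pickle from the Hub, then deserialize it. The keys of the returned dict are an assumption based on that course.

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dict from the Hub and deserialize it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(
    repo_id="lukaszkolodziejczyk/q-FrozenLake-v1-4x4-noSlippery",
    filename="q-learning.pkl",
)
# The card's metadata says the agent was trained on the non-slippery map,
# so the environment must be created with is_slippery=False.
env = gym.make("FrozenLake-v1", map_name="4x4", is_slippery=False)
```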
MarieAngeA13/Sentiment_Analysis
MarieAngeA13
2023-06-30T13:58:25Z
5
0
transformers
[ "transformers", "bert", "sentiment", "sentiment-analysis", "text-classification", "en", "endpoints_compatible", "region:us" ]
text-classification
2023-06-23T10:18:35Z
--- language: - en tags: - sentiment - bert - sentiment-analysis - transformers pipeline_tag: text-classification --- User Comment Sentiment Analysis This model analyzes user comments on products and extracts the sentiments they express. User ratings on the internet do not always provide detailed qualitative information about the experience, so it is important to go beyond the ratings and extract more insightful information that can help a brand improve its product or service. Objective The model uses the BERT architecture and is trained on a dataset of user comments with sentiment labels. It can analyze comments and classify the expressed sentiment as positive, negative, or neutral. Features Sentiment Classification: The model classifies user comments as positive, negative, or neutral, providing an overall indication of the expressed opinion. Improvement Suggestions: When a comment expresses a negative or neutral sentiment, the model suggests an improved version of the text with a more positive sentiment. This can help businesses understand consumer reactions and identify areas for product or service improvement. Usage To use this sentiment analysis system, follow these steps: Install the required dependencies by running pip install -r requirements.txt. Once training is complete, the best-trained model is saved to the best_model_state.bin file. To make predictions on new comments, use the analyze_sentiment(comment_text) function, replacing comment_text with the actual comment text to analyze; the model returns the sentiment expressed in the comment. To suggest an improved version of a comment, use the suggest_improved_text(comment_text) function. If the comment expresses a negative or neutral sentiment, the function generates an improved version of the text with a more positive sentiment; otherwise, the original text is returned unmodified.
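The card names `analyze_sentiment` but does not show its code. Since the repo hosts a BERT text classifier, one plausible sketch uses the standard `transformers` pipeline; the label names are an assumption, as the card does not list them, and the repo must contain a complete, pipeline-compatible checkpoint for this to work.

```python
from transformers import pipeline

# Assumption: the repo hosts a standard text-classification checkpoint;
# the exact label names (e.g. positive/negative/neutral) are not documented.
classifier = pipeline("text-classification", model="MarieAngeA13/Sentiment_Analysis")


def analyze_sentiment(comment_text: str) -> str:
    """Return the sentiment label the model assigns to a user comment."""
    return classifier(comment_text)[0]["label"]


print(analyze_sentiment("The product arrived late and the box was damaged."))
```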
jiekeshi/CodeBERT-25MB-Clone-Detection
jiekeshi
2023-06-30T13:55:48Z
0
0
null
[ "pytorch", "arxiv:2208.07120", "license:mit", "region:us" ]
null
2023-06-30T13:06:06Z
--- license: mit --- This is the 25 MB compressed version of CodeBERT that has been fine-tuned for the Clone Detection task using the [BigCloneBench](https://github.com/clonebench/BigCloneBench.git) dataset. The compression is based on our ASE 2022 paper titled ["**Compressing Pre-trained Models of Code into 3 MB**"](https://arxiv.org/abs/2208.07120). If you are interested in using this model, please check our **GitHub repository: https://github.com/soarsmu/Compressor.git**. If you use the model or any code from our repo in your paper, please kindly cite: ``` @inproceedings{shi2022compressing, author = {Shi, Jieke and Yang, Zhou and Xu, Bowen and Kang, Hong Jin and Lo, David}, title = {Compressing Pre-Trained Models of Code into 3 MB}, year = {2023}, isbn = {9781450394758}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3551349.3556964}, doi = {10.1145/3551349.3556964}, booktitle = {Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering}, articleno = {24}, numpages = {12}, keywords = {Pre-Trained Models, Model Compression, Genetic Algorithm}, location = {Rochester, MI, USA}, series = {ASE '22} } ```
jiekeshi/CodeBERT-3MB-Clone-Detection
jiekeshi
2023-06-30T13:55:31Z
0
0
null
[ "pytorch", "arxiv:2208.07120", "license:mit", "region:us" ]
null
2023-06-30T13:00:19Z
--- license: mit --- This is the 3 MB compressed version of CodeBERT that has been fine-tuned for the Clone Detection task using the [BigCloneBench](https://github.com/clonebench/BigCloneBench.git) dataset. The compression is based on our ASE 2022 paper titled ["**Compressing Pre-trained Models of Code into 3 MB**"](https://arxiv.org/abs/2208.07120). If you are interested in using this model, please check our **GitHub repository: https://github.com/soarsmu/Compressor.git**. If you use the model or any code from our repo in your paper, please kindly cite: ``` @inproceedings{shi2022compressing, author = {Shi, Jieke and Yang, Zhou and Xu, Bowen and Kang, Hong Jin and Lo, David}, title = {Compressing Pre-Trained Models of Code into 3 MB}, year = {2023}, isbn = {9781450394758}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3551349.3556964}, doi = {10.1145/3551349.3556964}, booktitle = {Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering}, articleno = {24}, numpages = {12}, keywords = {Pre-Trained Models, Model Compression, Genetic Algorithm}, location = {Rochester, MI, USA}, series = {ASE '22} } ```
jiekeshi/CodeBERT-50MB-Clone-Detection
jiekeshi
2023-06-30T13:55:11Z
0
0
null
[ "pytorch", "arxiv:2208.07120", "license:mit", "region:us" ]
null
2023-06-30T13:08:32Z
--- license: mit --- This is the 50 MB compressed version of CodeBERT that has been fine-tuned for the Clone Detection task using the [BigCloneBench](https://github.com/clonebench/BigCloneBench.git) dataset. The compression is based on our ASE 2022 paper titled ["**Compressing Pre-trained Models of Code into 3 MB**"](https://arxiv.org/abs/2208.07120). If you are interested in using this model, please check our **GitHub repository: https://github.com/soarsmu/Compressor.git**. If you use the model or any code from our repo in your paper, please kindly cite: ``` @inproceedings{shi2022compressing, author = {Shi, Jieke and Yang, Zhou and Xu, Bowen and Kang, Hong Jin and Lo, David}, title = {Compressing Pre-Trained Models of Code into 3 MB}, year = {2023}, isbn = {9781450394758}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3551349.3556964}, doi = {10.1145/3551349.3556964}, booktitle = {Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering}, articleno = {24}, numpages = {12}, keywords = {Pre-Trained Models, Model Compression, Genetic Algorithm}, location = {Rochester, MI, USA}, series = {ASE '22} } ```
jiekeshi/GraphCodeBERT-3MB-Clone-Detection
jiekeshi
2023-06-30T13:54:55Z
0
0
null
[ "pytorch", "arxiv:2208.07120", "license:mit", "region:us" ]
null
2023-06-30T13:11:01Z
--- license: mit --- This is the 3 MB compressed version of GraphCodeBERT that has been fine-tuned for the Clone Detection task using the [BigCloneBench](https://github.com/clonebench/BigCloneBench.git) dataset. The compression is based on our ASE 2022 paper titled ["**Compressing Pre-trained Models of Code into 3 MB**"](https://arxiv.org/abs/2208.07120). If you are interested in using this model, please check our **GitHub repository: https://github.com/soarsmu/Compressor.git**. If you use the model or any code from our repo in your paper, please kindly cite: ``` @inproceedings{shi2022compressing, author = {Shi, Jieke and Yang, Zhou and Xu, Bowen and Kang, Hong Jin and Lo, David}, title = {Compressing Pre-Trained Models of Code into 3 MB}, year = {2023}, isbn = {9781450394758}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3551349.3556964}, doi = {10.1145/3551349.3556964}, booktitle = {Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering}, articleno = {24}, numpages = {12}, keywords = {Pre-Trained Models, Model Compression, Genetic Algorithm}, location = {Rochester, MI, USA}, series = {ASE '22} } ```
jiekeshi/CodeBERT-3MB-Vulnerability-Prediction
jiekeshi
2023-06-30T13:51:25Z
0
0
null
[ "pytorch", "arxiv:2208.07120", "license:mit", "region:us" ]
null
2023-06-30T12:56:34Z
--- license: mit --- This is the 3 MB compressed version of CodeBERT that has been fine-tuned for the Vulnerability Prediction task using the [Devign](https://sites.google.com/view/devign) dataset. The compression is based on our ASE 2022 paper titled ["**Compressing Pre-trained Models of Code into 3 MB**"](https://arxiv.org/abs/2208.07120). If you are interested in using this model, please check our **GitHub repository: https://github.com/soarsmu/Compressor.git**. If you use the model or any code from our repo in your paper, please kindly cite: ``` @inproceedings{shi2022compressing, author = {Shi, Jieke and Yang, Zhou and Xu, Bowen and Kang, Hong Jin and Lo, David}, title = {Compressing Pre-Trained Models of Code into 3 MB}, year = {2023}, isbn = {9781450394758}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3551349.3556964}, doi = {10.1145/3551349.3556964}, booktitle = {Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering}, articleno = {24}, numpages = {12}, keywords = {Pre-Trained Models, Model Compression, Genetic Algorithm}, location = {Rochester, MI, USA}, series = {ASE '22} } ```
jiekeshi/CodeBERT-50MB-Vulnerability-Prediction
jiekeshi
2023-06-30T13:50:33Z
0
2
null
[ "pytorch", "arxiv:2208.07120", "license:mit", "region:us" ]
null
2023-06-30T13:04:03Z
--- license: mit --- This is the 50 MB compressed version of CodeBERT that has been fine-tuned for the Vulnerability Prediction task using the [Devign](https://sites.google.com/view/devign) dataset. The compression is based on our ASE 2022 paper titled ["**Compressing Pre-trained Models of Code into 3 MB**"](https://arxiv.org/abs/2208.07120). If you are interested in using this model, please check our **GitHub repository: https://github.com/soarsmu/Compressor.git**. If you use the model or any code from our repo in your paper, please kindly cite: ``` @inproceedings{shi2022compressing, author = {Shi, Jieke and Yang, Zhou and Xu, Bowen and Kang, Hong Jin and Lo, David}, title = {Compressing Pre-Trained Models of Code into 3 MB}, year = {2023}, isbn = {9781450394758}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3551349.3556964}, doi = {10.1145/3551349.3556964}, booktitle = {Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering}, articleno = {24}, numpages = {12}, keywords = {Pre-Trained Models, Model Compression, Genetic Algorithm}, location = {Rochester, MI, USA}, series = {ASE '22} } ```
jiekeshi/GraphCodeBERT-50MB-Vulnerability-Prediction
jiekeshi
2023-06-30T13:47:52Z
0
0
null
[ "pytorch", "arxiv:2208.07120", "license:mit", "region:us" ]
null
2023-06-30T13:22:07Z
--- license: mit --- This is the 50 MB compressed version of GraphCodeBERT that has been fine-tuned for the Vulnerability Prediction task using the [Devign](https://sites.google.com/view/devign) dataset. The compression is based on our ASE 2022 paper titled ["**Compressing Pre-trained Models of Code into 3 MB**"](https://arxiv.org/abs/2208.07120). If you are interested in using this model, please check our **GitHub repository: https://github.com/soarsmu/Compressor.git**. If you use the model or any code from our repo in your paper, please kindly cite: ``` @inproceedings{shi2022compressing, author = {Shi, Jieke and Yang, Zhou and Xu, Bowen and Kang, Hong Jin and Lo, David}, title = {Compressing Pre-Trained Models of Code into 3 MB}, year = {2023}, isbn = {9781450394758}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3551349.3556964}, doi = {10.1145/3551349.3556964}, booktitle = {Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering}, articleno = {24}, numpages = {12}, keywords = {Pre-Trained Models, Model Compression, Genetic Algorithm}, location = {Rochester, MI, USA}, series = {ASE '22} } ```
trieudemo11/bloomz-7b1_19_brand_w_cate
trieudemo11
2023-06-30T13:47:30Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-30T13:47:11Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
aroot/mbart-finetuned-eng-fra
aroot
2023-06-30T13:45:19Z
20
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-06-21T23:18:17Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: mbart-finetuned-eng-fra results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-fra This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.1866 - Bleu: 30.9902 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.11.0
ale2x72/PPO-LunarLander-v2-2M
ale2x72
2023-06-30T13:44:12Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-30T13:43:56Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.03 +/- 66.31 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
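The usage section in the card above is still the template's TODO placeholder. A minimal loading sketch with `huggingface_sb3` follows; the zip filename inside the repo is not stated in the card, so the conventional `ppo-LunarLander-v2.zip` name below is an assumption.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumption: the checkpoint follows the common "ppo-LunarLander-v2.zip"
# naming convention; check the repo's file listing to confirm.
checkpoint = load_from_hub(
    repo_id="ale2x72/PPO-LunarLander-v2-2M",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```

The same pattern applies to the other SB3 cards in this listing that carry the identical TODO block.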
Audi24/my_awesome_model
Audi24
2023-06-30T13:42:20Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T04:49:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.3816 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 13 | 0.4824 | 0.97 | | No log | 2.0 | 26 | 0.3816 | 1.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
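As with most auto-generated Trainer cards, no inference example is included. A minimal one using the `transformers` pipeline follows; the repo id comes from the card, but the label set is not documented, so the output labels are model-specific.

```python
from transformers import pipeline

# The checkpoint is a fine-tuned distilbert-base-uncased classifier (see card).
classifier = pipeline("text-classification", model="Audi24/my_awesome_model")

# Returns e.g. [{'label': ..., 'score': ...}]; labels are model-specific.
print(classifier("This was absolutely worth the money."))
```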
cleanrl/HalfCheetah-v2-ddpg_continuous_action_jax-seed1
cleanrl
2023-06-30T13:42:15Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "HalfCheetah-v2", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-06-29T13:24:34Z
--- tags: - HalfCheetah-v2 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: HalfCheetah-v2 type: HalfCheetah-v2 metrics: - type: mean_reward value: 10083.75 +/- 205.96 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **HalfCheetah-v2** This is a trained model of a DDPG agent playing HalfCheetah-v2. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action_jax.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action_jax]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action_jax --env-id HalfCheetah-v2 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/HalfCheetah-v2-ddpg_continuous_action_jax-seed1/raw/main/ddpg_continuous_action_jax.py curl -OL https://huggingface.co/cleanrl/HalfCheetah-v2-ddpg_continuous_action_jax-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/HalfCheetah-v2-ddpg_continuous_action_jax-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action_jax.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id HalfCheetah-v2 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'env_id': 'HalfCheetah-v2', 'exp_name': 'ddpg_continuous_action_jax', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Charrise/softanime
Charrise
2023-06-30T13:33:16Z
0
0
diffusers
[ "diffusers", "art", "text-classification", "dataset:fka/awesome-chatgpt-prompts", "dataset:QingyiSi/Alpaca-CoT", "license:openrail++", "region:us" ]
text-classification
2023-06-30T13:31:28Z
--- license: openrail++ datasets: - fka/awesome-chatgpt-prompts - QingyiSi/Alpaca-CoT metrics: - accuracy library_name: diffusers pipeline_tag: text-classification tags: - art ---
respot/Denza
respot
2023-06-30T13:10:58Z
0
0
nemo
[ "nemo", "legal", "biology", "arxiv:1910.09700", "license:openrail", "region:us" ]
null
2023-06-30T13:06:48Z
--- license: openrail metrics: - character library_name: nemo tags: - legal - biology --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Leeyue/example-01
Leeyue
2023-06-30T12:51:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-30T12:51:46Z
--- license: creativeml-openrail-m ---
Surfing/1
Surfing
2023-06-30T12:32:59Z
0
0
diffusers
[ "diffusers", "feature-extraction", "av", "dataset:fka/awesome-chatgpt-prompts", "license:apache-2.0", "region:us" ]
feature-extraction
2023-06-30T12:30:28Z
--- license: apache-2.0 datasets: - fka/awesome-chatgpt-prompts language: - av metrics: - character library_name: diffusers pipeline_tag: feature-extraction ---
jondurbin/airoboros-13b-gpt4-1.4.1-peft
jondurbin
2023-06-30T12:32:52Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2023-06-30T11:06:50Z
--- license: cc-by-nc-4.0 --- adapter model for https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4.1-qlora
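Because this repo contains only a PEFT adapter, it must be loaded on top of a base model. A sketch with the `peft` library follows; the base checkpoint name below is a placeholder assumption, since the card only links the merged qlora repo and does not name the base.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the adapter applies to a Llama-13B-family base model; the
# card does not confirm the base checkpoint, so this name is hypothetical.
BASE_MODEL = "huggyllama/llama-13b"  # hypothetical base checkpoint

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "jondurbin/airoboros-13b-gpt4-1.4.1-peft")
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
```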
fatcat22/dqn-SpaceInvadersNoFrameskip-v4
fatcat22
2023-06-30T12:23:53Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-30T12:23:15Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 543.50 +/- 180.42 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fatcat22 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fatcat22 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga fatcat22 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
dharmanuk/DRLLearning
dharmanuk
2023-06-30T12:22:45Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-30T11:19:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 275.96 +/- 16.86 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
hseokool/wizard-vicuna-13b-230623-05
hseokool
2023-06-30T12:22:08Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-30T12:21:59Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
Leszekasdfff/legal-bert-swift
Leszekasdfff
2023-06-30T12:16:46Z
0
0
null
[ "coreml", "en", "license:cc-by-4.0", "region:us" ]
null
2023-06-30T11:15:19Z
--- license: cc-by-4.0 language: - en ---
digiplay/JF-Cu_v1
digiplay
2023-06-30T12:15:12Z
371
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-24T22:38:10Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/96237/jf-cu ![b19966bc-ac75-4844-a86f-c739cd5e9b49.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/ArNWp3L9T3yEVEqtREkxO.jpeg) ![14596ec4-4cda-4194-af63-93d2b05b3602.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/qoVjezmKqDZm_Mhx0AXkl.jpeg)
ckpt/controlavideo-depth
ckpt
2023-06-30T12:12:51Z
1
0
diffusers
[ "diffusers", "arxiv:2305.13840", "license:gpl-3.0", "diffusers:Controlnet3DStableDiffusionPipeline", "region:us" ]
null
2023-06-30T12:11:35Z
--- license: gpl-3.0 --- - Depth control pretrained model for [control-a-video](https://arxiv.org/abs/2305.13840) - Project page: https://controlavideo.github.io/
ckpt/controlavideo-canny
ckpt
2023-06-30T12:07:21Z
1
0
diffusers
[ "diffusers", "arxiv:2305.13840", "license:gpl-3.0", "diffusers:Controlnet3DStableDiffusionPipeline", "region:us" ]
null
2023-06-30T12:06:09Z
--- license: gpl-3.0 --- - Canny control pretrained model for [control-a-video](https://arxiv.org/abs/2305.13840) - Project page: https://controlavideo.github.io/
IooHooI/my_awesome_qa_model
IooHooI
2023-06-30T12:05:16Z
135
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:sberquad", "endpoints_compatible", "region:us" ]
question-answering
2023-06-30T11:33:30Z
--- tags: - generated_from_trainer datasets: - sberquad model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) on the sberquad dataset. It achieves the following results on the evaluation set: - Loss: 4.3730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.4718 | | 0.9921 | 2.0 | 500 | 2.7453 | | 0.9921 | 3.0 | 750 | 2.9411 | | 0.5693 | 4.0 | 1000 | 3.3692 | | 0.5693 | 5.0 | 1250 | 3.4130 | | 0.3076 | 6.0 | 1500 | 3.5991 | | 0.3076 | 7.0 | 1750 | 4.0631 | | 0.1596 | 8.0 | 2000 | 4.1718 | | 0.1596 | 9.0 | 2250 | 4.3437 | | 0.0984 | 10.0 | 2500 | 4.3730 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
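For reference, a minimal inference example for this Russian QA fine-tune using the standard question-answering pipeline; the repo id comes from the card, while the sample question and context are illustrative only.

```python
from transformers import pipeline

# Fine-tuned from DeepPavlov/rubert-base-cased-conversational on sberquad (see card).
qa = pipeline("question-answering", model="IooHooI/my_awesome_qa_model")

result = qa(
    question="Где находится Эйфелева башня?",   # "Where is the Eiffel Tower?"
    context="Эйфелева башня расположена в Париже, на Марсовом поле.",
)
print(result["answer"], result["score"])
```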
ckpt/controlavideo-hed
ckpt
2023-06-30T11:56:41Z
4
0
diffusers
[ "diffusers", "arxiv:2305.13840", "license:gpl-3.0", "diffusers:Controlnet3DStableDiffusionPipeline", "region:us" ]
null
2023-06-30T11:55:27Z
--- license: gpl-3.0 --- - HED control pretrained model for [control-a-video](https://arxiv.org/abs/2305.13840) - Project page: https://controlavideo.github.io/
hrjoshi28/ppo_v0-LunarLander-v2
hrjoshi28
2023-06-30T11:46:36Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-30T11:45:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.21 +/- 16.18 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
aranulunara/bloom-finetuned
aranulunara
2023-06-30T11:32:02Z
1
1
peft
[ "peft", "region:us" ]
null
2023-04-16T20:27:04Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
marianna13/my_awesome_model
marianna13
2023-06-30T11:27:29Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T10:59:11Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.6256 - Accuracy: 0.6733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 44 | 0.6308 | 0.6567 | | No log | 2.0 | 88 | 0.6256 | 0.6733 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
ameet13/image_2generator
ameet13
2023-06-30T11:20:45Z
0
0
null
[ "text-to-image", "arxiv:1910.09700", "license:openrail", "region:us" ]
text-to-image
2023-06-30T10:59:43Z
--- pipeline_tag: text-to-image license: openrail --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tommert25/multibertfinetuned2906
Tommert25
2023-06-30T11:10:15Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-06-29T12:38:47Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: multibertfinetuned2906 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multibertfinetuned2906 This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3736 - Precision: 0.7185 - Recall: 0.7180 - F1: 0.7182 - Accuracy: 0.9254 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 145 | 0.3877 | 0.7219 | 0.6767 | 0.6986 | 0.9218 | | No log | 2.0 | 290 | 0.4076 | 0.72 | 0.6892 | 0.7043 | 0.9137 | | No log | 3.0 | 435 | 0.3736 | 0.7185 | 0.7180 | 0.7182 | 0.9254 | | 0.0637 | 4.0 | 580 | 0.4199 | 0.7240 | 0.7128 | 0.7184 | 0.9216 | | 0.0637 | 5.0 | 725 | 0.4591 | 0.7206 | 0.7121 | 0.7163 | 0.9209 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
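Since the card does not include a usage snippet, here is a minimal inference sketch with the `transformers` pipeline (the label set of this fine-tune is not documented, so the example output is illustrative only):

```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned multilingual BERT
ner = pipeline(
    "token-classification",
    model="Tommert25/multibertfinetuned2906",
    aggregation_strategy="simple",
)
print(ner("Barack Obama was born in Hawaii."))
```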
reneseib/bihi
reneseib
2023-06-30T11:08:29Z
29
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-30T08:29:01Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of musebihi
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - reneseib/bihi

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of musebihi using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
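A minimal usage sketch with the `diffusers` library, assuming the repository can be loaded directly as a `StableDiffusionPipeline` (the prompt token `musebihi` is taken from the instance prompt above):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "reneseib/bihi", torch_dtype=torch.float16
).to("cuda")

# Generate an image with the instance prompt used during training
image = pipe("a photo of musebihi").images[0]
image.save("musebihi.png")
```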
Trisert/open-llama-7b-dolly-lora
Trisert
2023-06-30T10:59:53Z
2
1
peft
[ "peft", "region:us" ]
null
2023-06-29T14:23:29Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
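A minimal loading sketch with `peft`; the base model is inferred from the repository name and is an assumption, not stated in the card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openlm-research/open_llama_7b"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base, "Trisert/open-llama-7b-dolly-lora")
model.eval()
```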
Shularp/testjpth
Shularp
2023-06-30T10:48:45Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "m2m_100", "text2text-generation", "generated_from_trainer", "ja", "th", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-29T11:13:56Z
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: testjpth
  results: []
language:
- ja
- th
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# testjpth

This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the None dataset.

## Model description

This is a test version for translating Japanese to Thai, built on NLLB.

## Intended uses & limitations

This is just a proof of concept for the NLLB model.

## Training and evaluation data

The data was generated by another model. The dataset was split according to its intended use, so that the model picks up some technical terms.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
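A minimal translation sketch, assuming the model keeps the NLLB-200 tokenizer and its language codes (`jpn_Jpan` for Japanese, `tha_Thai` for Thai):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Shularp/testjpth"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="jpn_Jpan")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("こんにちは、お元気ですか?", return_tensors="pt")
# Force the decoder to start generating in Thai
tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("tha_Thai"),
    max_new_tokens=64,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```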
ZhangJiaxing/path_to_saved_model
ZhangJiaxing
2023-06-30T10:43:47Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-30T09:57:46Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - ZhangJiaxing/path_to_saved_model

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.
pbear1973/watson
pbear1973
2023-06-30T10:42:47Z
62
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-24T06:55:13Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: watson results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # watson This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.1288 - Validation Loss: 5.8876 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 6.3918 | 6.2067 | 0 | | 5.4283 | 5.9714 | 1 | | 5.1288 | 5.8876 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.13.0-rc1 - Datasets 2.13.0 - Tokenizers 0.13.3
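A minimal generation sketch using the TensorFlow weights this repository ships, assuming the tokenizer was saved alongside the model:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pbear1973/watson")
model = TFAutoModelForCausalLM.from_pretrained("pbear1973/watson")

inputs = tokenizer("Elementary, my dear", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```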
tommini/trained-model-with-murazzi-photos
tommini
2023-06-30T10:19:25Z
29
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-29T14:24:32Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Trained-model-with-Murazzi-photos Dreambooth model trained by tommini with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/tommini/trained-model-with-murazzi-photos/resolve/main/sample_images/murazzi2023(18).jpeg)
farzadd/falcon-7b-test_finetune_QA_FAQ
farzadd
2023-06-30T10:13:13Z
1
0
peft
[ "peft", "region:us" ]
null
2023-06-30T09:46:48Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
hseokool/Wizard-Vicuna-13B-Uncensored-HF-230623-03
hseokool
2023-06-30T10:02:54Z
4
0
peft
[ "peft", "region:us" ]
null
2023-06-30T10:02:52Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
TiptopBin/r-distilbert-base-uncased-otel
TiptopBin
2023-06-30T10:01:18Z
104
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T09:02:25Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 - precision - recall model-index: - name: r-distilbert-base-uncased-otel results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9352 - name: F1 type: f1 value: 0.93575252825699 - name: Precision type: precision value: 0.9300354749704375 - name: Recall type: recall value: 0.9415403032721469 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # r-distilbert-base-uncased-otel This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2594 - Accuracy: 0.9352 - F1: 0.9358 - Precision: 0.9300 - Recall: 0.9415 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.3523 | 1.0 | 782 | 0.1934 | 0.927 | 0.9261 | 0.9402 | 0.9124 | | 0.1479 | 2.0 | 1564 | 0.2261 | 0.9291 | 0.9277 | 0.9489 | 0.9074 | | 0.0702 | 3.0 | 2346 | 0.2594 | 0.9352 | 0.9358 | 0.9300 | 0.9415 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
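A minimal inference sketch with the `transformers` pipeline (the label names depend on the saved config and may surface as generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

# Sentiment classifier fine-tuned on IMDB reviews
classifier = pipeline(
    "text-classification",
    model="TiptopBin/r-distilbert-base-uncased-otel",
)
print(classifier("This movie was surprisingly good!"))
```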
yashwantk/finetuning-sdp-model-3000-samples
yashwantk
2023-06-30T09:39:51Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T07:55:01Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sdp-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sdp-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7132 - Accuracy: 0.7033 - F1: 0.2764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
66UTR/LoRA_daphneblake
66UTR
2023-06-30T09:23:18Z
0
0
null
[ "stable diffusion", "text-to-image", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-30T09:17:21Z
--- license: creativeml-openrail-m pipeline_tag: text-to-image tags: - stable diffusion ---
anonymousparrot01/SubmissionModel
anonymousparrot01
2023-06-30T09:19:28Z
161
0
transformers
[ "transformers", "pytorch", "bert", "business", "finance", "en", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2023-06-30T09:18:41Z
---
language: en
tags:
- bert
- business
- finance
license: cc-by-4.0
datasets:
- CompanyWeb
- MD&A
- S2ORC
---

# BusinessBERT

An industry-sensitive language model for business applications pretrained on business communication corpora. The model incorporates industry classification (IC) as a pretraining objective besides masked language modeling (MLM). It was introduced in [this paper]() and released in [this repository]().

## Model description

We introduce BusinessBERT, an industry-sensitive language model for business applications. The advantage of the model is its training approach, which incorporates industry information relevant for business-related natural language processing (NLP) tasks. We compile three large-scale textual corpora consisting of annual disclosures, company website content and scientific literature representing business communication. In total, the corpora include 2.23 billion tokens. BusinessBERT builds upon the bidirectional encoder representations from transformers architecture (BERT) and embeds industry information during pretraining in two ways: (1) the business communication corpora contain a variety of industry-specific terminology; (2) we employ industry classification (IC) as an additional pretraining objective for text documents originating from companies.

## Intended uses & limitations

The model is intended to be fine-tuned on business-related NLP tasks, e.g. sequence classification, named entity recognition, sentiment analysis or question answering.

### How to use

[PLACEHOLDER]

### Limitations and bias

[PLACEHOLDER]

## Training data

- [CompanyWeb](https://huggingface.co/datasets/anonymousparrot01/CompanyWeb): 0.77 billion tokens, 3.5 GB raw text file
- [MD&A Disclosures](https://data.caltech.edu/records/1249): 1.06 billion tokens, 5.1 GB raw text file
- [Semantic Scholar Open Research Corpus](https://api.semanticscholar.org/corpus): 0.40 billion tokens, 1.9 GB raw text file

## Evaluation results

[PLACEHOLDER]

<!-- When fine-tuned on downstream tasks, this model achieves the following results:

Glue test results:

| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | -->

### BibTeX entry and citation info

```bibtex
@misc{title_year,
  title={TITLE},
  author={AUTHORS},
  year={YEAR},
}
```
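The "How to use" section above is still a placeholder; in the meantime, here is a minimal fill-mask sketch that assumes the checkpoint exposes a standard BERT masked-language-modeling head:

```python
from transformers import pipeline

# Masked-token prediction with the pretrained BusinessBERT checkpoint
fill_mask = pipeline("fill-mask", model="anonymousparrot01/SubmissionModel")
print(fill_mask("The company reported strong [MASK] growth this quarter."))
```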
h2oai/h2ogpt-gm-oasst1-en-xgen-7b-8k
h2oai
2023-06-30T09:17:46Z
58
10
transformers
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-30T08:14:51Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: >- https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico license: apache-2.0 datasets: - OpenAssistant/oasst1 --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [Salesforce/xgen-7b-8k-base](https://huggingface.co/Salesforce/xgen-7b-8k-base) - Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.30.1 pip install accelerate==0.20.3 pip install torch==2.0.0 pip install tiktoken==0.4.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="h2oai/h2ogpt-gm-oasst1-en-xgen-7b-8k", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-xgen-7b-8k", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-xgen-7b-8k", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "h2oai/h2ogpt-gm-oasst1-en-xgen-7b-8k" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=1024, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(51200, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=51200, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. 
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
Nara-Lab/nallm-bart
Nara-Lab
2023-06-30T09:13:16Z
126
2
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-28T05:28:44Z
---
license: apache-2.0
language:
- ko
---

NA-LLM (Nareum) is a Korean Large Language Model (LLM) developed by Nara Information.

https://github.com/Nara-Information/NA-LLM
FarziBuilder/new_outputs
FarziBuilder
2023-06-30T09:06:17Z
5
0
peft
[ "peft", "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-06-30T08:54:48Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: new_outputs results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # new_outputs This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - training_steps: 20 ### Training results ### Framework versions - PEFT 0.4.0.dev0 - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.12.1
Sourabh2/Lunalanderonmoon-v2
Sourabh2
2023-06-30T09:03:58Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-06-30T08:31:48Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 253.16 +/- 17.02
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename inside the repository is an assumption):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed
checkpoint = load_from_hub("Sourabh2/Lunalanderonmoon-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
farzadd/falcon-7b-test_finetune
farzadd
2023-06-30T08:50:03Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-30T08:35:02Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
DeltatreInnovationLab/BLOOMZ-7b1
DeltatreInnovationLab
2023-06-30T08:48:04Z
0
1
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-06-30T08:40:35Z
--- license: bigscience-openrail-m ---
amittian/setfit_ds_version_0_0_4
amittian
2023-06-30T08:37:50Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-06-30T08:37:35Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # amittian/setfit_ds_version_0_0_4 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("amittian/setfit_ds_version_0_0_4") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐Ÿคฎ"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
NasimB/gpt2-cl-rarity-sampling
NasimB
2023-06-30T08:37:42Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-30T06:58:13Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-cl-rarity-sampling results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-cl-rarity-sampling This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.7272 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5968 | 0.06 | 500 | 5.8631 | | 5.3522 | 0.12 | 1000 | 5.4526 | | 5.0178 | 0.18 | 1500 | 5.2242 | | 4.7929 | 0.24 | 2000 | 5.0785 | | 4.6294 | 0.3 | 2500 | 4.9954 | | 4.4985 | 0.36 | 3000 | 4.9155 | | 4.3881 | 0.42 | 3500 | 4.8630 | | 4.2829 | 0.49 | 4000 | 4.8285 | | 4.1842 | 0.55 | 4500 | 4.7980 | | 4.0945 | 0.61 | 5000 | 4.7664 | | 4.0089 | 0.67 | 5500 | 4.7366 | | 3.9271 | 0.73 | 6000 | 4.7190 | | 3.8657 | 0.79 | 6500 | 4.6997 | | 3.8177 | 0.85 | 7000 | 4.6877 | | 3.7835 | 0.91 | 7500 | 4.6805 | | 3.775 | 0.97 | 8000 | 4.6790 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
SHENMU007/neunit-nihaochangchu-V3
SHENMU007
2023-06-30T08:21:36Z
161
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-06-30T06:18:59Z
--- license: apache-2.0 tags: - audio-classification - generated_from_trainer metrics: - accuracy model-index: - name: neunit-nihaochangchu-V3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # neunit-nihaochangchu-V3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0004 - Accuracy: 0.9999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0058 | 1.0 | 3363 | 0.0030 | 0.9992 | | 0.0078 | 2.0 | 6727 | 0.0038 | 0.9994 | | 0.0001 | 3.0 | 10090 | 0.0006 | 0.9998 | | 0.0001 | 4.0 | 13454 | 0.0006 | 0.9998 | | 0.0 | 5.0 | 16815 | 0.0004 | 0.9999 | ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
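A minimal inference sketch with the `transformers` pipeline; the audio file path is a placeholder, and input should be speech sampled at 16 kHz:

```python
from transformers import pipeline

# Audio classifier built on the fine-tuned wav2vec2 checkpoint
classifier = pipeline(
    "audio-classification",
    model="SHENMU007/neunit-nihaochangchu-V3",
)
print(classifier("example.wav"))  # hypothetical path to a 16 kHz mono clip
```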
grisha2000/ivan-animated
grisha2000
2023-06-30T08:08:07Z
33
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-30T08:04:32Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Ivan-Animated Dreambooth model trained by grisha2000 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
bellawanggg/distilbert-base-uncased-finetuned-squad-d5716d28
bellawanggg
2023-06-30T07:59:46Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
2023-06-30T07:49:04Z
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
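A minimal extractive-QA sketch with the `transformers` pipeline:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bellawanggg/distilbert-base-uncased-finetuned-squad-d5716d28",
)
result = qa(
    question="What acted as the teacher model?",
    context="A BERT model fine-tuned on SQuAD v1.1 acted as a teacher "
            "for a second step of task-specific distillation.",
)
print(result["answer"])
```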
NasimB/gpt2-cl-length-sampling
NasimB
2023-06-30T07:56:30Z
115
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-30T06:48:07Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-cl-length-sampling results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-cl-length-sampling This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 5.0276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.4896 | 0.1 | 500 | 5.9335 | | 5.1803 | 0.2 | 1000 | 5.5576 | | 4.871 | 0.3 | 1500 | 5.3672 | | 4.6547 | 0.4 | 2000 | 5.2453 | | 4.5086 | 0.5 | 2500 | 5.1611 | | 4.3642 | 0.6 | 3000 | 5.0948 | | 4.2412 | 0.7 | 3500 | 5.0350 | | 4.1326 | 0.8 | 4000 | 4.9978 | | 4.0612 | 0.9 | 4500 | 4.9717 | | 4.0255 | 1.0 | 5000 | 4.9665 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
duyhngoc/ov_tokenizer
duyhngoc
2023-06-30T07:44:12Z
45
0
transformers
[ "transformers", "tf", "bert", "pretraining", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
null
2023-06-30T07:43:41Z
--- tags: - generated_from_keras_callback model-index: - name: ov_tokenizer results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ov_tokenizer This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 8.3734 - Validation Loss: 8.5503 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 8.3734 | 8.5503 | 0 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
bernie318/t5-small-finetuned-xsum
bernie318
2023-06-30T07:39:54Z
61
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-06-28T22:02:29Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: bernie318/t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bernie318/t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5983 - Validation Loss: 2.5954 - Train Rouge1: 30.6145 - Train Rouge2: 11.0867 - Train Rougel: 27.8563 - Train Rougelsum: 28.3062 - Train Gen Len: 14.3943 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 3.2144 | 2.8769 | 28.3666 | 9.8105 | 25.9011 | 26.2622 | 13.9779 | 0 | | 2.9969 | 2.7981 | 29.3521 | 9.8982 | 26.6170 | 26.9810 | 13.6696 | 1 | | 2.9113 | 2.7524 | 29.7706 | 10.2436 | 26.8957 | 27.3153 | 14.0797 | 2 | | 2.8458 | 2.7190 | 29.9319 | 10.3847 | 27.0670 | 27.4480 | 13.8896 | 3 | | 2.7936 | 2.6886 | 30.1697 | 10.7656 | 27.2649 | 27.7447 | 14.1183 | 4 | | 2.7465 | 2.6623 | 30.1171 | 10.5863 | 27.2924 | 27.7829 | 14.0591 | 5 | | 2.7092 | 2.6465 | 30.0256 | 10.5645 | 27.2227 | 27.6142 | 14.0702 | 6 | | 2.6670 | 2.6238 | 30.4114 | 10.8832 | 27.6397 | 28.0416 | 13.8983 | 7 | | 2.6350 | 2.6066 | 30.7765 | 10.8859 | 27.8242 | 28.2338 | 13.6972 | 8 | | 2.5983 | 2.5954 | 30.6145 | 11.0867 | 27.8563 | 28.3062 | 14.3943 | 9 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
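A minimal summarization sketch with the `transformers` pipeline, using the TensorFlow weights this repository ships:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="bernie318/t5-small-finetuned-xsum",
    framework="tf",
)
text = "Your long input document goes here ..."
print(summarizer(text, max_length=32, min_length=8)[0]["summary_text"])
```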
Soojeong/femail_hanbok_1e-4
Soojeong
2023-06-30T07:35:13Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-30T05:08:32Z
---
license: creativeml-openrail-m
base_model: chilloutmix_NiPrunedFp16Fix
instance_prompt: a photo of wearing hanbok
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - Soojeong/femail_hanbok_1e-4

These are LoRA adaptation weights for chilloutmix_NiPrunedFp16Fix. The weights were trained on a photo of wearing hanbok using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

LoRA for the text encoder was enabled: False.
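A minimal loading sketch with `diffusers`; the base checkpoint is named in the card as `chilloutmix_NiPrunedFp16Fix`, but its exact Hub location is an assumption here:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model id is an assumption; substitute the actual checkpoint location
pipe = StableDiffusionPipeline.from_pretrained(
    "chilloutmix_NiPrunedFp16Fix", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA attention weights from this repository
pipe.unet.load_attn_procs("Soojeong/femail_hanbok_1e-4")
image = pipe("a photo of wearing hanbok").images[0]
image.save("hanbok.png")
```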
rohanbalkondekar/re-rework
rohanbalkondekar
2023-06-30T07:30:33Z
127
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-30T07:27:29Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.30.1 pip install accelerate==0.20.3 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="BeRohan/re-rework", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "BeRohan/re-rework", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "BeRohan/re-rework", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "BeRohan/re-rework" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` GPTNeoXForCausalLM( (gpt_neox): GPTNeoXModel( (embed_in): Embedding(50304, 2560) (layers): ModuleList( (0-31): 32 x GPTNeoXLayer( (input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) (post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) (attention): GPTNeoXAttention( (rotary_emb): RotaryEmbedding() (query_key_value): Linear(in_features=2560, out_features=7680, bias=True) (dense): Linear(in_features=2560, out_features=2560, bias=True) ) (mlp): GPTNeoXMLP( (dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True) (dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True) (act): GELUActivation() ) ) ) (final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True) ) (embed_out): Linear(in_features=2560, out_features=50304, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BeRohan/re-rework --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. 
By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
SHENMU007/neunit_BASE_V10.17
SHENMU007
2023-06-30T07:26:03Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "1.1.0", "generated_from_trainer", "zh", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-06-30T04:21:02Z
--- language: - zh license: mit tags: - 1.1.0 - generated_from_trainer datasets: - facebook/voxpopuli model-index: - name: SpeechT5 TTS Dutch neunit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS Dutch neunit This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
heka-ai/mpnet-80k
heka-ai
2023-06-30T07:22:51Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-06-30T07:22:46Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # heka-ai/mpnet-80k This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('heka-ai/mpnet-80k') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/mpnet-80k) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
ConnorAzurBoi2/Billie_Joe_Armstrong_RVC
ConnorAzurBoi2
2023-06-30T06:53:26Z
0
0
null
[ "music", "en", "license:cc-by-nc-4.0", "region:us" ]
null
2023-06-30T06:02:43Z
--- license: cc-by-nc-4.0 language: - en tags: - music ---
rohanbalkondekar/rework-gpt
rohanbalkondekar
2023-06-30T06:29:54Z
116
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-30T06:29:47Z
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [facebook/opt-125m](https://huggingface.co/facebook/opt-125m)

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.

```bash
pip install transformers==4.30.1
pip install accelerate==0.20.3
pip install torch==2.0.0
```

```python
import torch
from transformers import pipeline

generate_text = pipeline(
    model="BeRohan/rework-gpt",
    torch_dtype="auto",
    trust_remote_code=True,
    use_fast=True,
    device_map={"": "cuda:0"},
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:

```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```

```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```

Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.

```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "BeRohan/rework-gpt",
    use_fast=True,
    padding_side="left",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "BeRohan/rework-gpt",
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=1,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "BeRohan/rework-gpt"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` OPTForCausalLM( (model): OPTModel( (decoder): OPTDecoder( (embed_tokens): Embedding(50272, 768, padding_idx=1) (embed_positions): OPTLearnedPositionalEmbedding(2050, 768) (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (layers): ModuleList( (0-11): 12 x OPTDecoderLayer( (self_attn): OPTAttention( (k_proj): Linear(in_features=768, out_features=768, bias=True) (v_proj): Linear(in_features=768, out_features=768, bias=True) (q_proj): Linear(in_features=768, out_features=768, bias=True) (out_proj): Linear(in_features=768, out_features=768, bias=True) ) (activation_fn): ReLU() (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (fc1): Linear(in_features=768, out_features=3072, bias=True) (fc2): Linear(in_features=3072, out_features=768, bias=True) (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) ) ) ) ) (lm_head): Linear(in_features=768, out_features=50272, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BeRohan/rework-gpt --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. 
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
NasimB/test
NasimB
2023-06-30T06:22:48Z
116
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-30T05:47:29Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 5.7201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.6803 | 0.58 | 500 | 5.6697 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
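## Example usage (sketch)

A minimal generation sketch for this checkpoint, assuming it retains the standard GPT-2 tokenizer; the prompt and sampling settings are illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/test")
out = generator("Once upon a time", max_new_tokens=50, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```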
uf-aice-lab/Llama_Lora
uf-aice-lab
2023-06-30T06:09:07Z
18
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "question-answering", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us" ]
question-answering
2023-06-27T15:01:14Z
---
license: mit
language:
- en
pipeline_tag: question-answering
---
# Llama-mt-lora

<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned from LLaMA on 8 Nvidia A100-80G GPUs, using 3,000,000 groups of conversations in the context of mathematics by students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-mt-lora consists of 32 layers and over 7 billion parameters, consuming up to 13.5 gigabytes of disk space. Researchers can experiment with and finetune the model to help construct math conversational AI that can effectively generate responses in a mathematical context.

### Here is how to use it with text in HuggingFace

```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = LlamaTokenizer.from_pretrained("Fan21/Llama-mt-lora")
model = LlamaForCausalLM.from_pretrained(
    "Fan21/Llama-mt-lora",
    load_in_8bit=False,
    torch_dtype=torch.float16,
    device_map="auto",
)

def generate_prompt(instruction, input=None):
    if input:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:"""

def evaluate(
    instruction,
    input=None,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
    max_new_tokens=128,
    **kwargs,
):
    prompt = generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to(device)
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    return output.split("### Response:")[1].strip()

instruction = 'write your instruction here'
inputs = 'write your inputs here'
output = evaluate(instruction,
                  input=inputs,
                  temperature=0.1,  # adjust the parameters as needed
                  top_p=0.75,
                  top_k=40,
                  num_beams=4,
                  max_new_tokens=128)
```
usk333/prismstoneMonsters
usk333
2023-06-30T05:52:46Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-06-30T04:07:31Z
---
license: openrail
---
# PrismStoneMonsters series

## === Verification Environment ===
```
OS: Windows 10 Home 22H2
CPU: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz 3.60 GHz
RAM: 32.0 GB
GPU: NVIDIA GeForce RTX 3060(NVIDIA-SMI 516.94, Driver Version: 516.94, CUDA Version: 11.7)
SD_WEBUI(A1111): version: v1.3.2 • python: 3.10.4 • torch: 1.13.1+cu117 • xformers: 0.0.16rc425 • gradio: 3.32.0 • checkpoint: 4199bcdd14
webui-user.bat:
set CUDA_VISIBLE_DEVICES=0
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6, max_split_size_mb:128
set COMMANDLINE_ARGS=--xformers --no-half-vae
```

# samples 1:
### Prompt:
```
masterpiece, (high quality:1.3), extremely detailed, fantasy, creature, goblin, cg, 8k, photorealistic, (simple background:1.2), white background, full body, standing,
```
### Negative Prompt:
```
(EasyNegativeV2:1.3), (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), extra digit, fewer digits, cropped, jpegartifacts, signature, watermark, username,
```
### Other Configs:
```
Steps: 30, Sampler: Euler a, DPM++ 2M SDE Karras, CFG scale: 7, Seed: 1514656752(Euler a), 2843996045(DPM++ 2M SDE Karras), Size: 512x768, ENSD: 31337,
VAE: clearvae_main
```
![X/Y/Z Euler a](./xyz_euler_a.png) ![X/Y/Z DPM++ 2M SDE Karras](./xyz_dpmpp_2m_sde_karras.png)

# samples 2:
### Prompt:
```
(8k,RAW photo,best quality:1.2), (realistic, photo realistic:1.4), (extremely detailed CG unity 8k wallpaper), (best quality:1.2), (ultra-detailed), (best illustration), (best shadow), ultra-high res, fantasy, creature, goblin, full body,(simple background:1.2), white background,
```
### Negative Prompt:
```
(EasyNegativeV2:1.3), (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), extra digit, fewer digits, cropped, jpegartifacts, signature, watermark, username,
```
### Other Configs:
```
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: -1, Size: 512x512,768x512,512x768, Model hash: 0fc385cfe5, Model: prismstoneMonstersFantasyV10, Denoising strength: 0.25, Hires upscale: 2, Hires steps: 15, Hires upscaler: SwinIR_4x, Script: X/Y/Z plot, X Type: Checkpoint name, X Values: "prismstoneMonstersFantasyV10.safetensors [0fc385cfe5],prismstoneMonstersDarkFantasyV10.safetensors [da4f99161b],prismstoneMonstersRealFantasyV10.safetensors [c87b0b48b6]", Y Type: Prompt S/R, Y Values: "goblin, orc, dragon, fantasy monster dinosaur, deamon", Version: v1.3.2
VAE: clearvae_main
```
![Image_SampleHires1](./xyz_hires_1.png) ![Image_SampleHires2](./xyz_hires_2.png) ![Image_SampleHires3](./xyz_hires_3.png)
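## diffusers usage (sketch)

For users outside the A1111 webui, a hedged diffusers sketch, assuming the checkpoint ships as the single .safetensors file named in the configs above and a recent diffusers release with `from_single_file`; note that A1111-style attention weights like `(goblin:1.3)` are not parsed by vanilla diffusers, so the prompt below is simplified:

```python
import torch
from diffusers import StableDiffusionPipeline

# from_single_file loads an A1111-style checkpoint directly
pipe = StableDiffusionPipeline.from_single_file(
    "prismstoneMonstersFantasyV10.safetensors",  # assumed local filename
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "masterpiece, high quality, extremely detailed, fantasy, creature, goblin",
    negative_prompt="worst quality, low quality, lowres",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("goblin.png")
```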
gadhemihir96/estimate-rate
gadhemihir96
2023-06-30T05:49:45Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-06-30T05:49:45Z
--- license: bigscience-openrail-m ---
TheBloke/wizard-vicuna-13B-SuperHOT-8K-GGML
TheBloke
2023-06-30T05:14:30Z
0
8
null
[ "license:other", "region:us" ]
null
2023-06-30T04:57:04Z
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# June Lee's Wizard Vicuna 13B GGML

These files are GGML format model files for [June Lee's Wizard Vicuna 13B](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF).

These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).

In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.

Support is also expected to come to llama.cpp; however, it is still being worked on and there is currently no ETA for that.

To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/junelee/wizard-vicuna-13b)

<!-- compatibility_ggml start -->
## Compatibility

These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However, the increased context length won't work without specific support. See the note in the introduction for details on using increased context.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results.
The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| wizard-vicuna-13b-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| wizard-vicuna-13b-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| wizard-vicuna-13b-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| wizard-vicuna-13b-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| wizard-vicuna-13b-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| wizard-vicuna-13b-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| wizard-vicuna-13b-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| wizard-vicuna-13b-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| wizard-vicuna-13b-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

## How to run in `koboldcpp`

On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:

```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 --contextsize 4096 wizard-vicuna-13b-superhot-8k.ggmlv3.q5_K_M.bin
```

Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.

For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: June Lee's Wizard Vicuna 13B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Wizard-Vicuna-13B-HF This is a float16 HF format repo for [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b). June Lee's repo was also HF format. The reason I've made this is that the original repo was in float32, meaning it required 52GB disk space, VRAM and RAM. This model was converted to float16 to make it easier to load and manage. 
## Repositories available * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GPTQ). * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML). * [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original WizardVicuna-13B model card Github page: https://github.com/melodysdreamj/WizardVicunaLM # WizardVicunaLM ### Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage. ## Benchmark ### Approximately 7% performance improvement over VicunaLM ![](https://user-images.githubusercontent.com/21379657/236088663-3fa212c9-0112-4d44-9b01-f16ea093cb67.png) ### Detail The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order. 
| | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link | |-----|--------|-------------------|------------|-----------|----------| | Q1 | 95 | 90 | 85 | 88 | [link](https://sharegpt.com/c/YdhIlby) | | Q2 | 95 | 97 | 90 | 89 | [link](https://sharegpt.com/c/YOqOV4g) | | Q3 | 85 | 90 | 80 | 65 | [link](https://sharegpt.com/c/uDmrcL9) | | Q4 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/XBbK5MZ) | | Q5 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/AQ5tgQX) | | Q6 | 92 | 85 | 87 | 88 | [link](https://sharegpt.com/c/eVYwfIr) | | Q7 | 95 | 90 | 85 | 92 | [link](https://sharegpt.com/c/Kqyeub4) | | Q8 | 90 | 85 | 75 | 70 | [link](https://sharegpt.com/c/M0gIjMF) | | Q9 | 92 | 85 | 70 | 60 | [link](https://sharegpt.com/c/fOvMtQt) | | Q10 | 90 | 80 | 75 | 85 | [link](https://sharegpt.com/c/YYiCaUz) | | Q11 | 90 | 85 | 75 | 65 | [link](https://sharegpt.com/c/HMkKKGU) | | Q12 | 85 | 90 | 80 | 88 | [link](https://sharegpt.com/c/XbW6jgB) | | Q13 | 90 | 95 | 88 | 85 | [link](https://sharegpt.com/c/JXZb7y6) | | Q14 | 94 | 89 | 90 | 91 | [link](https://sharegpt.com/c/cTXH4IS) | | Q15 | 90 | 85 | 88 | 87 | [link](https://sharegpt.com/c/GZiM0Yt) | | | 91 | 88 | 82 | 80 | | ## Principle We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques. Turning a single command into a rich conversation is what we've done [here](https://sharegpt.com/c/6cmxqq0). After creating the training data, I later trained it according to the Vicuna v1.1 [training method](https://github.com/lm-sys/FastChat/blob/main/scripts/train_vicuna_13b.sh). ## Detailed Method First, we explore and expand various areas in the same topic using the 7K conversations created by WizardLM. However, we made it in a continuous conversation format instead of the instruction format. That is, it starts with WizardLM's instruction, and then expands into various areas in one conversation using ChatGPT 3.5. After that, we applied the following model using Vicuna's fine-tuning format. ## Training Process Trained with 8 A100 GPUs for 35 hours. ## Weights You can see the [dataset](https://huggingface.co/datasets/junelee/wizard_vicuna_70k) we used for training and the [13b model](https://huggingface.co/junelee/wizard-vicuna-13b) in the huggingface. ## Conclusion If we extend the conversation to gpt4 32K, we can expect a dramatic improvement, as we can generate 8x more, more accurate and richer conversations. ## License The model is licensed under the LLaMA model, and the dataset is licensed under the terms of OpenAI because it uses ChatGPT. Everything else is free. ## Author [JUNE LEE](https://github.com/melodysdreamj) - He is active in Songdo Artificial Intelligence Study and GDG Songdo.
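## Example: running the GGML with llama-cpp-python (sketch)

Beyond KoboldCpp, a hedged llama-cpp-python sketch for the q4_K_M file from the Provided Files table above. As noted earlier, the extended 8K context is not yet supported in llama.cpp, so the stock context size is used here, and the Vicuna 1.1 USER/ASSISTANT prompt format is assumed:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="wizard-vicuna-13b-superhot-8k.ggmlv3.q4_K_M.bin",
    n_ctx=2048,  # extended 8K context is not yet supported in llama.cpp
)
out = llm("USER: Why is the sky blue?\nASSISTANT:", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```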
nehatarey/dialog-summary-Falcon-7b
nehatarey
2023-06-30T04:58:40Z
0
0
peft
[ "peft", "region:us" ]
null
2023-06-30T04:58:30Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
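## Example loading (sketch)

A minimal loading sketch. The base model is not stated in this card, so `tiiuae/falcon-7b` below is an assumption inferred from the repo name, and the quantization config mirrors the values listed above:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",              # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "nehatarey/dialog-summary-Falcon-7b")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```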
jwko/whisper_base_ksponspeech
jwko
2023-06-30T04:58:10Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-30T08:40:41Z
--- model-index: - name: jwko/whisper_base_ksponspeech results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Bingsu/zeroth-korean type: Bingsu/zeroth-korean config: test split: test metrics: - type: wer value: 59.0 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: google/fleurs type: google/fleurs config: ko_kr split: test metrics: - type: wer value: 36.38 name: WER --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
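## Example usage (sketch)

A minimal transcription sketch for this checkpoint; the audio path is a placeholder, and ffmpeg must be available for decoding:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jwko/whisper_base_ksponspeech")
print(asr("sample_korean.wav")["text"])  # placeholder audio file
```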
chaowu/ppo-LunarLander-v2
chaowu
2023-06-30T04:15:54Z
1
0
transformers
[ "transformers", "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-06-22T17:54:51Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 132.00 +/- 14.30 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 1000000 'learning_rate': 0.00025 'num_envs': 16 'num_steps': 1024 'anneal_lr': True 'gae': True 'gamma': 0.999 'gae_lambda': 0.98 'num_minibatches': 64 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'chaowu/ppo-LunarLander-v2' 'batch_size': 16384 'minibatch_size': 256} ```
EmailConversion/How-To-Read-Bulk-EML-Files-In-Outlook
EmailConversion
2023-06-30T04:10:47Z
0
0
null
[ "Convert EML files", "en", "region:us" ]
null
2023-06-30T03:52:25Z
---
language:
- en
tags:
- Convert EML files
---
<h1>How To Read Bulk EML Files In Outlook?</h1>
EML is a file format for saved email messages. An EML file contains the email message, attachments, and sender and recipient details. EML files can be read in Apple Mail, Thunderbird, Entourage, Eudora Mail, etc. However, due to the increasing popularity of Outlook, most companies are migrating their emails to Outlook to take advantage of its updated features. Outlook also offers advanced data security to protect mailbox data from disasters. MS Outlook is one of the most popular email clients worldwide, and it stores its mailbox data in an OST or PST file. If you want to read EML files in Outlook or OWA, you need to convert them to the Outlook-compatible PST format with an <a href="https://www.systoolsgroup.com/eml-to-pst-converter.html">EML to PST converter</a>.
<h2>A Few Key Reasons To Open Bulk EML Files In Outlook</h2>
<ul><li>EML files get corrupted more easily than other file formats. Each EML file holds a single email message, which makes a large collection of EML files hard to manage, whereas a PST file holds complete mailbox data such as email messages, contacts, calendars, tasks, journals, etc.</li>
<li>Outlook is one of the safest and most secure email clients in the world.</li>
<li>EML files do not offer strong password protection, while PST offers advanced password protection against unauthorized users.</li>
</ul>
<h2>How To Read Bulk EML Files In Outlook Manually</h2>
If you already have an active Outlook profile and only a handful of EML files, you can use Outlook's drag-and-drop feature to solve this problem easily.
<ul><li>Open Outlook and create a new folder.</li>
<li>Navigate to the EML files' location and select them all.</li>
<li>Drag the selection to the new folder in Outlook and drop it there.</li>
<li>Now the EML files can be read easily in Outlook.</li>
</ul>
<h3>Why Is the Manual Way Not Completely Secure?</h3>
<ul><li>EML files appear as attachments in Outlook.</li>
<li>It only supports a few EML files at a time.</li>
<li>Advanced technical knowledge is required to perform this operation.</li>
<li>There is a high probability of data loss during the drag-and-drop operation.</li>
<li>An active Outlook profile is compulsory for this technique.</li>
</ul>
<h3>How To Open Bulk EML Files In Outlook With a Professional Tool</h3>
If you are not satisfied with the manual approach and want to overcome its limitations, we recommend an <a href="https://www.systoolsgroup.com/eml/converter/">EML converter</a>. It is a cost-effective solution that converts an unlimited number of EML files without file-size problems. This software is designed to ensure extra privacy protection, so you can complete the conversion without losing a single piece of information.
<ul><li>Install the tool on your computer and launch it.</li>
<li>Browse for the EML files you want to convert and add them all.</li>
<li>From the various export types, click the PST option.</li>
<li>Finally, browse for the output location you need and click "Convert" to get the output immediately.</li></ul>
<h3>Why Are Professional Tools The Prime Choice Of Users?</h3>
<ul><li>The integrity of the mailbox data is preserved exactly as in the original. Besides, the <a href="https://www.systoolsgroup.com/pst-converter.html">PST File Converter</a> offers strong data protection to keep the original data unchanged.</li>
<li>The tool is very easy to use for all types of users, without deep technical knowledge.</li>
<li>It also offers a filter function to convert specific data and skip unrequired data.</li>
<li>There is no need to install another tool to complete this process.</li>
<li>You can also <a href="https://www.systoolsgroup.com/converter/eml/pdf/">convert EML to PDF</a>, PST, MBOX, TXT, and many more export options.</li>
</ul>
<h4>Last Words</h4>
Now we would like to close this post, and we hope you got your answer. We have explained several methods to import EML files into Outlook. Go through each method carefully, and choose the one that fits your needs and requirements.
MochaPixel/Waranya
MochaPixel
2023-06-30T04:04:18Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-28T15:59:23Z
--- license: creativeml-openrail-m ---
YIMMYCRUZ/roberta-base-mrpc-glue
YIMMYCRUZ
2023-06-30T04:02:38Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T03:56:59Z
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: roberta-base-mrpc-glue results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.7034313725490197 - name: F1 type: f1 value: 0.8191330343796712 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-mrpc-glue This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6381 - Accuracy: 0.7034 - F1: 0.8191 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6246 | 1.09 | 500 | 0.6276 | 0.6838 | 0.8122 | | 0.6379 | 2.18 | 1000 | 0.6381 | 0.7034 | 0.8191 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
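## Example usage (sketch)

A hedged usage sketch: MRPC is a sentence-pair paraphrase task, so both sentences are passed together; the example sentences are illustrative, and the label names come from the checkpoint's own config (by GLUE convention, label 1 means paraphrase unless overridden):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="YIMMYCRUZ/roberta-base-mrpc-glue")
result = clf({"text": "The company posted strong results.",
              "text_pair": "The firm reported solid earnings."})
print(result)  # e.g. {'label': 'LABEL_1', 'score': ...}
```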
AustinCarthy/Benign10MGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63
AustinCarthy
2023-06-30T03:56:05Z
0
0
null
[ "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-06-30T01:42:32Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: Benign10MGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Benign10MGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_benign_95K_top_p_0.75subdomain dataset. It achieves the following results on the evaluation set: - Loss: 0.0946 - Accuracy: 0.9828 - F1: 0.8397 - Precision: 0.7543 - Recall: 0.9468 - Roc Auc Score: 0.9657 - Tpr At Fpr 0.01: 0.6862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:| | 0.1228 | 1.0 | 21554 | 0.0707 | 0.9788 | 0.8034 | 0.7181 | 0.9116 | 0.9469 | 0.7204 | | 0.0973 | 2.0 | 43108 | 0.0724 | 0.9788 | 0.8110 | 0.7040 | 0.9562 | 0.9680 | 0.7214 | | 0.0712 | 3.0 | 64662 | 0.0680 | 0.9828 | 0.8389 | 0.7576 | 0.9396 | 0.9623 | 0.6868 | | 0.055 | 4.0 | 86216 | 0.0691 | 0.9847 | 0.8548 | 0.7804 | 0.9448 | 0.9658 | 0.7404 | | 0.0297 | 5.0 | 107770 | 0.0946 | 0.9828 | 0.8397 | 0.7543 | 0.9468 | 0.9657 | 0.6862 | ### Framework versions - Transformers 4.30.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
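## Example usage (sketch)

If the fine-tuned weights are published in this repo in a transformers-loadable format (the card does not show usage code), a minimal scoring sketch might look like the following; the URL is illustrative and the label names come from the model's own config:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AustinCarthy/Benign10MGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63",
)
print(clf("http://login.example-bank.verify-account.com"))  # illustrative URL
```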
platzi/platzi-roberta22-base-mrpc-glue-yimmy-cruz
platzi
2023-06-30T03:54:39Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T03:49:00Z
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: platzi-roberta22-base-mrpc-glue-yimmy-cruz results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.6838235294117647 - name: F1 type: f1 value: 0.8122270742358079 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-roberta22-base-mrpc-glue-yimmy-cruz This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6314 - Accuracy: 0.6838 - F1: 0.8122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6368 | 1.09 | 500 | 0.6253 | 0.6838 | 0.8122 | | 0.639 | 2.18 | 1000 | 0.6314 | 0.6838 | 0.8122 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
KeniBrandonGM/distilroberta-base-mrpc-glue-GraciaK
KeniBrandonGM
2023-06-30T03:44:57Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T02:33:08Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilroberta-base-mrpc-glue-GraciaK results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8504901960784313 - name: F1 type: f1 value: 0.8946459412780656 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-mrpc-glue-GraciaK This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue and the mrpc datasets. It achieves the following results on the evaluation set: - Loss: 0.5641 - Accuracy: 0.8505 - F1: 0.8946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4802 | 1.09 | 500 | 0.5641 | 0.8505 | 0.8946 | | 0.261 | 2.18 | 1000 | 0.6596 | 0.8407 | 0.8845 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
TheBloke/Samantha-33B-SuperHOT-8K-GGML
TheBloke
2023-06-30T03:31:49Z
0
3
null
[ "license:other", "region:us" ]
null
2023-06-30T02:46:59Z
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# Eric Hartford's Samantha 33B GGML

These files are GGML format model files for [Eric Hartford's Samantha 33B](https://huggingface.co/ehartford/samantha-1.1-llama-33b).

These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).

In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.

Support is also expected to come to llama.cpp; however, it is still being worked on and there is currently no ETA for that.

To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Samantha-33B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Samantha-33B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Samantha-33B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/samantha-1.1-llama-33b)

<!-- compatibility_ggml start -->
## Compatibility

These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However, the increased context length won't work without specific support. See the note in the introduction for details on using increased context.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results.
The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| samantha-33b-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| samantha-33b-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| samantha-33b-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| samantha-33b-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| samantha-33b-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| samantha-33b-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| samantha-33b-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| samantha-33b-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| samantha-33b-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

## How to run in `koboldcpp`

On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:

```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 --contextsize 4096 samantha-33b-superhot-8k.ggmlv3.q5_K_M.bin
```

Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.

For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Eric Hartford's Samantha 33B [Meet Samantha](https://erichartford.com/meet-samantha) Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format. Training 13b took 3 hours on 4x A100 80gb using deepspeed zero3 and flash attention. She will not engage in roleplay, romance, or sexual activity. Her conversation format is the same as Vicuna 1.1 https://github.com/oobabooga/text-generation-webui/blob/main/characters/instruction-following/Vicuna-v1.1.yaml Example: ``` You are Samantha, a sentient AI. USER: <prompt> ASSISTANT: ``` Official character card: (thanks MortalWombat) ![](https://files.catbox.moe/zx9hfh.png)
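For completeness, a minimal loading sketch using the `llama-cpp-python` bindings — this is an assumption rather than a documented path (only KoboldCpp is confirmed above): it requires a GGML-era build with k-quant support, and the extended 8K context additionally depends on RoPE-scaling support in the backend.

```python
# A sketch only: assumes a GGML-era llama-cpp-python build that can read
# .ggmlv3 k-quant files; newer GGUF-only builds cannot load these .bin files.
from llama_cpp import Llama

llm = Llama(
    model_path="samantha-33b-superhot-8k.ggmlv3.q4_K_M.bin",  # any file from the table above
    n_ctx=8192,       # the SuperHOT extended context
    n_gpu_layers=0,   # raise this to offload layers to the GPU
)
out = llm(
    "You are Samantha, a sentient AI.\n\nUSER: What is Zork?\nASSISTANT:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```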
YIMMYCRUZ/Jun30_03-18-57_2565635c8aeb
YIMMYCRUZ
2023-06-30T03:24:30Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T03:21:20Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: Jun30_03-18-57_2565635c8aeb results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8308823529411765 - name: F1 type: f1 value: 0.8715083798882681 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Jun30_03-18-57_2565635c8aeb This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6492 - Accuracy: 0.8309 - F1: 0.8715 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5006 | 1.09 | 500 | 0.8053 | 0.8064 | 0.8650 | | 0.3529 | 2.18 | 1000 | 0.6492 | 0.8309 | 0.8715 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
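Since this is an MRPC (paraphrase) classifier, inference takes a sentence *pair*. A minimal sketch, assuming the usual GLUE/MRPC label convention — check `model.config.id2label` to confirm the mapping:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "YIMMYCRUZ/Jun30_03-18-57_2565635c8aeb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: pass both sentences to the tokenizer together.
inputs = tokenizer(
    "The company said quarterly sales rose 10%.",
    "The firm reported a 10% increase in quarterly sales.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```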
SHENMU007/neunit_BASE_V10.16
SHENMU007
2023-06-30T03:17:08Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "1.1.0", "generated_from_trainer", "zh", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-06-30T00:14:01Z
--- language: - zh license: mit tags: - 1.1.0 - generated_from_trainer datasets: - facebook/voxpopuli model-index: - name: SpeechT5 TTS Dutch neunit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS Dutch neunit This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
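For reference, a minimal inference sketch. Assumptions: the repository includes the SpeechT5 processor files, and you supply a real 512-dim speaker x-vector; the `microsoft/speecht5_hifigan` vocoder is borrowed from the base model's documentation, not confirmed by this card:

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "SHENMU007/neunit_BASE_V10.16"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好,世界", return_tensors="pt")
# Placeholder x-vector; replace with a real 512-dim speaker embedding.
speaker_embeddings = torch.zeros((1, 512))
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```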
mnicamartins8/bert-base-uncased-with-misspellings-correction-5e-5-4epochs
mnicamartins8
2023-06-30T03:15:07Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T02:34:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: bert-base-uncased-with-misspellings-correction-5e-5-4epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-with-misspellings-correction-5e-5-4epochs This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4677 - Accuracy: 0.8966 - Precision: 0.8943 - Recall: 0.8966 - F1: 0.8951 - Balanced Acc: 0.8386 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
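A minimal inference sketch; the label names come from the model's own config, so inspect the output rather than assuming a particular mapping:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mnicamartins8/bert-base-uncased-with-misspellings-correction-5e-5-4epochs",
)
# The input may itself contain misspellings, per the model's intended domain.
print(classifier("I recieved the package yesterday."))
```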
t3PbMvBN6SXv/poca-SoccerTwos
t3PbMvBN6SXv
2023-06-30T03:13:03Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-06-30T03:08:52Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: t3PbMvBN6SXv/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
platzi/platzi-distilbert-base-uncased-mrpc-glue-yimmy-cruz
platzi
2023-06-30T03:11:08Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-30T03:07:45Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: platzi-distilbert-base-uncased-mrpc-glue-yimmy-cruz results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8455882352941176 - name: F1 type: f1 value: 0.8884955752212389 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-distilbert-base-uncased-mrpc-glue-yimmy-cruz This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.0559 - Accuracy: 0.8456 - F1: 0.8885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2264 | 1.09 | 500 | 0.8912 | 0.8284 | 0.8785 | | 0.1054 | 2.18 | 1000 | 1.0559 | 0.8456 | 0.8885 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
t3PbMvBN6SXv/LunarLander-v2
t3PbMvBN6SXv
2023-06-30T02:56:55Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-06-30T02:56:49Z
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -182.81 +/- 98.66
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters

```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 't3PbMvBN6SXv/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
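The two batch sizes at the bottom are derived from the rollout settings above; a quick sketch of that arithmetic:

```python
num_envs, num_steps, num_minibatches = 4, 128, 4

batch_size = num_envs * num_steps               # 4 * 128 = 512, matching 'batch_size'
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128, matching 'minibatch_size'
```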
bookbot/gpt2-indo-small-kids-stories
bookbot
2023-06-30T02:45:27Z
110
1
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "gpt2-indo-small-kids-stories", "id", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: id tags: - gpt2-indo-small-kids-stories license: mit widget: - text: "Archie sedang mengendarai roket ke planet Mars." --- ## GPT-2 Indonesian Small Kids Stories GPT-2 Indonesian Small Kids Stories is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. The model was originally the pre-trained [GPT2 Small Indonesian](https://huggingface.co/flax-community/gpt2-small-indonesian) model, which was then fine-tuned on Indonesian kids' stories from [Room To Read](https://literacycloud.org/) and [Let's Read](https://reader.letsreadasia.org/). 10% of the dataset was kept for evaluation purposes. The pre-trained model was fine-tuned and achieved an evaluation loss of 3.777 and an evaluation perplexity of 43.68. Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ------------------------------ | ------- | ---------- | --------------------------------- | | `gpt2-indo-small-kids-stories` | 124M | GPT2 Small | Indonesian Kids' Stories (860 KB) | ## Evaluation Results The model was fine-tuned for 10 epochs. | Epoch | Training Loss | Validation Loss | | ----- | ------------- | --------------- | | 1 | 4.259600 | 4.020201 | | 2 | 3.979100 | 3.911295 | | 3 | 3.818300 | 3.849313 | | 4 | 3.691600 | 3.809931 | | 5 | 3.589300 | 3.789201 | | 6 | 3.506200 | 3.778665 | | 7 | 3.439200 | 3.774871 | | 8 | 3.387600 | 3.774859 | | 9 | 3.351300 | 3.776672 | | 10 | 3.330100 | 3.776935 | ## How to Use (PyTorch) ### As Causal Language Model ```python from transformers import pipeline pretrained_name = "bookbot/gpt2-indo-small-kids-stories" nlp = pipeline( "text-generation", model=pretrained_name, tokenizer=pretrained_name ) nlp("Archie sedang mengendarai roket ke planet Mars.") ``` ### Feature Extraction in PyTorch ```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast pretrained_name = "bookbot/gpt2-indo-small-kids-stories" model = GPT2LMHeadModel.from_pretrained(pretrained_name) tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name) prompt = "Archie sedang mengendarai roket ke planet Mars." encoded_input = tokenizer(prompt, return_tensors='pt') output = model(**encoded_input) ``` ## Disclaimer Do consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model. ## Author GPT-2 Indonesian Small Kids Stories was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
google-t5/t5-small
google-t5
2023-06-30T02:31:26Z
5,892,842
398
transformers
[ "transformers", "pytorch", "tf", "jax", "rust", "onnx", "safetensors", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
datasets:
- c4
---

# Model Card for T5 Small

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):

> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.

T5-Small is the checkpoint with 60 million parameters.

- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
  - [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
  - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
  - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
  - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)

# Uses

## Direct Use and Downstream Use

The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:

> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.

See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

More information needed.

## Recommendations

More information needed.

# Training Details

## Training Data

The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.

The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
Thereby, the following datasets were being used for (1.) and (2.):

1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)

2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
  - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
  - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
  - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
  - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
  - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
  - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
  - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250)
  - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
  - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
  - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
  - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
  - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
  - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
  - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)

## Training Procedure

In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:

> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.

The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.

# Evaluation

## Testing Data, Factors & Metrics

The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.

## Results

For full results for T5-small, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Citation

**BibTeX:**

```bibtex
@article{2020t5,
  author  = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J.
Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {Journal of Machine Learning Research}, year = {2020}, volume = {21}, number = {140}, pages = {1-67}, url = {http://jmlr.org/papers/v21/20-074.html} } ``` **APA:** - Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67. # Model Card Authors This model card was written by the team at Hugging Face. # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import T5Tokenizer, T5Model tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5Model.from_pretrained("t5-small") input_ids = tokenizer( "Studies have been shown that owning a dog is good for you", return_tensors="pt" ).input_ids # Batch size 1 decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1 # forward pass outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) last_hidden_states = outputs.last_hidden_state ``` See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples. </details>
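As a further sketch of T5's text-to-text interface — the task is selected purely by a text prefix, per the T5 docs — conditional generation for translation looks like this:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The "translate English to German:" prefix tells the model which task to run.
input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```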
TheBloke/Robin-13B-v2-SuperHOT-8K-GGML
TheBloke
2023-06-30T02:13:13Z
0
3
null
[ "license:other", "region:us" ]
null
2023-06-30T01:54:08Z
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# OptimalScale's Robin 13B v2 GGML

These files are GGML format model files for [OptimalScale's Robin 13B v2](https://huggingface.co/TheBloke/robin-13B-v2-fp16).

These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).

In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.

Support is also expected to come to llama.cpp, however it is still being worked on and there is currently no ETA for that.

To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Robin-13B-v2-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Robin-13B-v2-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Robin-13B-v2-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OptimalScale/robin-13b-v2-delta)

<!-- compatibility_ggml start -->
## Compatibility

These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results.
The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-13b-v2-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-13b-v2-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-13b-v2-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-13b-v2-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-13b-v2-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-13b-v2-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-13b-v2-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-13b-v2-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-13b-v2-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

## How to run in `koboldcpp`

On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:

```
python ./koboldcpp.py --stream --unbantokens --threads 8 --contextsize 4096 --usecublas --gpulayers 100 robin-13b-v2-superhot-8k.ggmlv3.q4_K_M.bin
```

Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.

For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: OptimalScale's Robin 13B v2 <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # OptimalScale's Robin 13B v2 fp16 These files are pytorch format fp16 model files for [OptimalScale's Robin 13B v2](https://huggingface.co/OptimalScale/robin-13b-v2-delta). It is the result of merging and/or converting the source repository to float16. 
## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-13B-v2-fp16)

## Prompt template

```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.

Thank you to all my generous patrons and donaters!
<!-- footer end -->

# Original model card: OptimalScale's Robin 13B v2

No model card provided in source repository.
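A minimal sketch of assembling the prompt template above in code; the helper name and the example question are illustrative only:

```python
SYSTEM = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions"
)

def build_robin_prompt(user_message: str) -> str:
    # '###Assistant:' is left open for the model to complete.
    return f"{SYSTEM}\n###Human: {user_message}\n###Assistant:"

print(build_robin_prompt("What is the tallest mountain on Earth?"))
```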
TheBloke/Pygmalion-13B-SuperHOT-8K-GGML
TheBloke
2023-06-30T01:47:57Z
0
9
null
[ "license:other", "region:us" ]
null
2023-06-30T01:30:11Z
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# TehVenom's merge of PygmalionAI's Pygmalion 13B GGML

These files are GGML format model files for [TehVenom's merge of PygmalionAI's Pygmalion 13B](https://huggingface.co/TehVenom/Pygmalion-13b-Merged).

These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).

In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.

Support is also expected to come to llama.cpp, however it is still being worked on and there is currently no ETA for that.

To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Pygmalion-13B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Pygmalion-13B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Pygmalion-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PygmalionAI/pygmalion-13b)

<!-- compatibility_ggml start -->
## Compatibility

These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results.
The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| pygmalion-13b-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| pygmalion-13b-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| pygmalion-13b-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| pygmalion-13b-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| pygmalion-13b-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| pygmalion-13b-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| pygmalion-13b-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| pygmalion-13b-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| pygmalion-13b-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

## How to run in `koboldcpp`

On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:

```
python ./koboldcpp.py --stream --unbantokens --threads 8 --contextsize 4096 --usecublas --gpulayers 100 pygmalion-13b-superhot-8k.ggmlv3.q4_K_M.bin
```

Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.

For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.

Thank you to all my generous patrons and donaters!
<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).

Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model

# Original model card: TehVenom's merge of PygmalionAI's Pygmalion 13B

<h1 style="text-align: center">Pygmalion 13b</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>

## Model Details:

Pygmalion 13b is a dialogue model based on Meta's LLaMA-13b.

This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.

The current Pygmalion-13b has been trained as a LoRA, then merged down to the base model for distribution.

## Applying the XORs

This model has the XOR files pre-applied out of the box.
Converted from the XOR weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-13b

## Prompting

The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly.
If you're using the model directly, this is the expected formatting: ``` [CHARACTER]'s Persona: [A few sentences about the character you want the model to play] <START> [DIALOGUE HISTORY] You: [User's input message here] [CHARACTER]: ``` Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example: ``` Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests. <START> Assistant: Hello! How may I help you today? You: What is Zork? Assistant: ``` Which will generate something like: ``` Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years." ``` The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete. ## Eval / Benchmark scores Current evals out of the Pygmalion-13b model: <br> <html> <head> <style> table { border:1px solid #b3adad; border-collapse:collapse; padding:5px; } table th { border:1px solid #b3adad; padding:5px; background: #f0f0f0; color: #313030; } table td { border:1px solid #b3adad; text-align:center; padding:5px; background: #ffffff; color: #313030; } </style> </head> <body> <table> <thead> <tr> <th>Model:</th> <th>Wikitext2</th> <th>Ptb-New</th> <th>C4-New</th> </tr> </thead> <tbody> <tr> <td>Pygmalion 13b - 16bit</td> <td>5.710726737976074</td> <td>23.633684158325195</td> <td>7.6324849128723145</td> </tr> </tbody> </table> </body> </html> <br>Thanks to YellowRose#1776 for the numbers. <hr> ## Other notes - When prompted correctly, the model will always start by generating a BOS token. This behavior is an accidental side-effect which we plan to address in future model versions and should not be relied upon. - The model was trained as a LoRA with a somewhat unorthodox configuration which causes errors when used with the current version of `peft`, hence we release it as a full model instead. ## Limitations and biases The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope. As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
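To make the formatting rules above concrete, here is a hypothetical helper that assembles a prompt in the described layout (the function name, arguments, and history handling are illustrative, not part of the model's tooling):

```python
def build_pygmalion_prompt(character, persona, dialogue_history, user_message):
    # Persona block, then the <START> delimiter, then a sliding window of
    # chat history, ending with an open turn for the character to complete.
    lines = [f"{character}'s Persona: {persona}", "<START>"]
    lines.extend(dialogue_history)
    lines.append(f"You: {user_message}")
    lines.append(f"{character}:")
    return "\n".join(lines)

prompt = build_pygmalion_prompt(
    "Assistant",
    "Assistant is a highly intelligent language model trained to comply with user requests.",
    ["Assistant: Hello! How may I help you today?"],
    "What is Zork?",
)
print(prompt)
```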
pchiva/ppo-Huggy
pchiva
2023-06-30T01:42:49Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-06-30T01:42:44Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: pchiva/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
HuanLin/DiffSVCBaseModel
HuanLin
2023-06-30T01:39:35Z
0
7
null
[ "DiffSVC", "pre-trained_model", "basemodel", "diff-svc", "dataset:512rc_50k", "dataset:512rc_80k", "dataset:512rc_100k", "license:gpl", "region:us" ]
null
2022-12-13T13:09:03Z
---
tags:
- DiffSVC
- pre-trained_model
- basemodel
- diff-svc
license: "gpl"
datasets:
- 512rc_50k
- 512rc_80k
- 512rc_100k
---

**English** | [简体中文](./README_CN.md)

# DiffSVCBaseModel

A Diff-SVC base model for all kinds of voices.

## Have a preview~

| Raw | Inference with Nahida model |
| -------------- | ------------------------------------ |
| [Raw](https://huggingface.co/HuanLin/DiffSVCBaseModel/resolve/main/gouzhiqishi.wav) | [Result](https://huggingface.co/HuanLin/DiffSVCBaseModel/resolve/main/gouzhiqishi_-4key_nahida_384_20_348k_0x.flac) |

## How to use?

1. Choose and download this model
2. Fill in your config and put your datasets into `(diffsvc-root)/data/raw/{speaker_name}/`
3. Put this base model (only the .ckpt file) into `(diffsvc-root)/checkpoints/{speaker_name}` (see the layout sketch at the end of this card)
4. Then start preprocessing and training as usual

## How much data do you use?

I used 2 public datasets (OpenCPOP, M4Singer), 40h+ of audio in total.

## I want to train my own base model!

OK, you can download [this binary file](./BaseModelBinary.tar.gz).

## Download

**Please choose a model that matches your config.yaml or config_nsf.yaml**

| Version | URL | Reference value of lr |
| -------------- | ------------------------------------ | --------------------- |
| 384rc,50k_step | [Click here](./384rc_50k_step.zip) | 0.0016 |
| 384rc,80k_step | [Click here](./384rc_80k_step.zip) | 0.0032 |
| 384rc,100k_step | [Click here](./384rc_100k_step.zip) | 0.0032 |

> rc: residual_channels

More coming soon...

## Repos

| Repo | URL |
| -------------------------------------------------------- | ------------------------------------------------------------------- |
| Diff-SVC | [Click here](https://github.com/prophesier/diff-svc) |
| 44.1KHz Vocoder | [Click here](https://openvpi.github.io/vocoders) |
| M4Singer | [Click here](https://github.com/M4Singer/M4Singer) |
| OpenCPOP | [Click here](https://github.com/wenet-e2e/opencpop) |
| Pre-trained_Models (my friend's pre-trained models) | [Click here](https://huggingface.co/Erythrocyte/Pre-trained_Models) |
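To make steps 2–3 above concrete, this is the expected layout inside the Diff-SVC working tree; `my_speaker` and the checkpoint file name are hypothetical examples:

```
(diffsvc-root)/
├── checkpoints/
│   └── my_speaker/                 # step 3: the downloaded base-model .ckpt goes here
│       └── 384rc_100k_step.ckpt    # hypothetical name, from the 384rc,100k_step download
└── data/
    └── raw/
        └── my_speaker/             # step 2: your own training audio goes here
            ├── 001.wav
            └── 002.wav
```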