| Column | Type | Range |
|:--|:--|:--|
| `pipeline_tag` | stringclasses | 48 values |
| `library_name` | stringclasses | 205 values |
| `text` | stringlengths | 0 – 18.3M |
| `metadata` | stringlengths | 2 – 1.07B |
| `id` | stringlengths | 5 – 122 |
| `last_modified` | null | — |
| `tags` | listlengths | 1 – 1.84k |
| `sha` | null | — |
| `created_at` | stringlengths | 25 – 25 |
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
liquid9212/nd2jbc1
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T19:22:22+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Small Cantonese

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.3020
- Wer: 56.0901

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0965        | 1.4174 | 1000 | 0.2585          | 58.6483 |
| 0.0307        | 2.8349 | 2000 | 0.2716          | 57.0065 |
| 0.0035        | 4.2523 | 3000 | 0.2918          | 56.2810 |
| 0.001         | 5.6697 | 4000 | 0.3020          | 56.0901 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
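The Wer figures in this card are word error rates in percent: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal stdlib sketch of the metric (not the card's actual evaluation code, which would typically use the `evaluate`/`jiwer` libraries):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat mat"))  # two deletions out of six words
```

Note that for Cantonese, character error rate is often reported instead; this card labels the metric Wer, so the sketch follows that.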
{"language": ["yue"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Cantanese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 16.0", "type": "mozilla-foundation/common_voice_16_0", "config": "yue", "split": "None", "args": "config: yue, split: test"}, "metrics": [{"type": "wer", "value": 56.09011072928599, "name": "Wer"}]}]}]}
Practice-canto/whisper-small-cantonese_26-04-2024-1922
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "yue", "dataset:mozilla-foundation/common_voice_16_0", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-26T19:22:26+00:00
null
null
{}
Krompirko/rosa
null
[ "region:us" ]
null
2024-04-26T19:23:12+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_notata-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.1358 - F1 Score: 0.9555 - Accuracy: 0.9555 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.2044 | 0.6 | 200 | 0.1225 | 0.9555 | 0.9555 | | 0.126 | 1.2 | 400 | 0.1318 | 0.9485 | 0.9486 | | 0.1202 | 1.81 | 600 | 0.1154 | 0.9561 | 0.9561 | | 0.1036 | 2.41 | 800 | 0.1075 | 0.9604 | 0.9604 | | 0.1066 | 3.01 | 1000 | 0.1122 | 0.9608 | 0.9608 | | 0.0987 | 3.61 | 1200 | 0.1206 | 0.9568 | 0.9568 | | 0.097 | 4.22 | 1400 | 0.1066 | 0.9612 | 0.9612 | | 0.0938 | 4.82 | 1600 | 0.0984 | 0.9614 | 0.9614 | | 0.0909 | 5.42 | 1800 | 0.1044 | 0.9602 | 0.9602 | | 0.0877 | 6.02 | 2000 | 0.0958 | 0.9631 | 0.9631 | | 0.0886 | 6.63 | 2200 | 0.0928 | 0.9644 | 0.9644 | | 0.0836 | 7.23 | 2400 | 0.1026 | 0.9632 | 0.9633 | | 0.0779 | 7.83 | 2600 | 0.1011 | 0.9642 | 0.9642 | | 0.0797 | 8.43 | 2800 | 0.0986 | 0.9640 | 0.9640 | | 0.0789 | 9.04 | 
3000 | 0.1015 | 0.9651 | 0.9651 | | 0.0733 | 9.64 | 3200 | 0.0952 | 0.9668 | 0.9668 | | 0.0714 | 10.24 | 3400 | 0.1080 | 0.9634 | 0.9634 | | 0.0737 | 10.84 | 3600 | 0.0922 | 0.9668 | 0.9668 | | 0.0651 | 11.45 | 3800 | 0.0968 | 0.9659 | 0.9659 | | 0.0676 | 12.05 | 4000 | 0.1078 | 0.9642 | 0.9642 | | 0.0662 | 12.65 | 4200 | 0.0944 | 0.9661 | 0.9661 | | 0.0638 | 13.25 | 4400 | 0.1070 | 0.9655 | 0.9655 | | 0.0614 | 13.86 | 4600 | 0.0990 | 0.9665 | 0.9665 | | 0.0598 | 14.46 | 4800 | 0.1029 | 0.9681 | 0.9682 | | 0.0568 | 15.06 | 5000 | 0.1000 | 0.9661 | 0.9661 | | 0.055 | 15.66 | 5200 | 0.1026 | 0.9666 | 0.9666 | | 0.0544 | 16.27 | 5400 | 0.1151 | 0.9653 | 0.9653 | | 0.0525 | 16.87 | 5600 | 0.1003 | 0.9680 | 0.9680 | | 0.05 | 17.47 | 5800 | 0.1067 | 0.9659 | 0.9659 | | 0.0527 | 18.07 | 6000 | 0.1135 | 0.9661 | 0.9661 | | 0.0474 | 18.67 | 6200 | 0.1090 | 0.9674 | 0.9674 | | 0.0498 | 19.28 | 6400 | 0.1081 | 0.9665 | 0.9665 | | 0.044 | 19.88 | 6600 | 0.1199 | 0.9676 | 0.9676 | | 0.0452 | 20.48 | 6800 | 0.1131 | 0.9663 | 0.9663 | | 0.0453 | 21.08 | 7000 | 0.1190 | 0.9655 | 0.9655 | | 0.0417 | 21.69 | 7200 | 0.1176 | 0.9651 | 0.9651 | | 0.0385 | 22.29 | 7400 | 0.1210 | 0.9665 | 0.9665 | | 0.0401 | 22.89 | 7600 | 0.1178 | 0.9663 | 0.9663 | | 0.0404 | 23.49 | 7800 | 0.1174 | 0.9668 | 0.9668 | | 0.038 | 24.1 | 8000 | 0.1168 | 0.9674 | 0.9674 | | 0.037 | 24.7 | 8200 | 0.1268 | 0.9668 | 0.9668 | | 0.0369 | 25.3 | 8400 | 0.1228 | 0.9665 | 0.9665 | | 0.0361 | 25.9 | 8600 | 0.1224 | 0.9674 | 0.9674 | | 0.0349 | 26.51 | 8800 | 0.1238 | 0.9665 | 0.9665 | | 0.0358 | 27.11 | 9000 | 0.1228 | 0.9672 | 0.9672 | | 0.033 | 27.71 | 9200 | 0.1271 | 0.9678 | 0.9678 | | 0.0338 | 28.31 | 9400 | 0.1239 | 0.9678 | 0.9678 | | 0.0337 | 28.92 | 9600 | 0.1247 | 0.9670 | 0.9670 | | 0.033 | 29.52 | 9800 | 0.1249 | 0.9674 | 0.9674 | | 0.0315 | 30.12 | 10000 | 0.1260 | 0.9678 | 0.9678 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
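The `linear` scheduler listed in the hyperparameters decays the learning rate from its initial value to zero over `training_steps` (no warmup steps are listed for this run). A sketch of the schedule these runs would follow, mirroring what `transformers.get_linear_schedule_with_warmup` computes under those assumptions:

```python
def linear_lr(step: int, base_lr: float = 5e-4, warmup: int = 0, total: int = 10_000) -> float:
    """Linear schedule: ramp up over `warmup` steps, then decay linearly to zero at `total`."""
    if step < warmup:
        return base_lr * step / max(1, warmup)
    return base_lr * max(0.0, (total - step) / max(1, total - warmup))

# Learning rate at the start, midpoint, and end of the 10,000-step run above.
print(linear_lr(0), linear_lr(5_000), linear_lr(10_000))
```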
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T19:23:23+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_all-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.4112 - F1 Score: 0.8150 - Accuracy: 0.8150 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5196 | 0.54 | 200 | 0.4708 | 0.7768 | 0.7769 | | 0.4716 | 1.08 | 400 | 0.4598 | 0.7880 | 0.7880 | | 0.4517 | 1.62 | 600 | 0.4506 | 0.7938 | 0.7939 | | 0.4415 | 2.16 | 800 | 0.4477 | 0.7969 | 0.7971 | | 0.4363 | 2.7 | 1000 | 0.4394 | 0.8020 | 0.8020 | | 0.4319 | 3.24 | 1200 | 0.4358 | 0.8047 | 0.8047 | | 0.4314 | 3.78 | 1400 | 0.4343 | 0.8023 | 0.8024 | | 0.423 | 4.32 | 1600 | 0.4372 | 0.8047 | 0.8047 | | 0.4253 | 4.86 | 1800 | 0.4352 | 0.8042 | 0.8042 | | 0.4206 | 5.41 | 2000 | 0.4362 | 0.8038 | 0.8039 | | 0.4178 | 5.95 | 2200 | 0.4362 | 0.8031 | 0.8032 | | 0.4225 | 6.49 | 2400 | 0.4350 | 0.8012 | 0.8014 | | 0.4139 | 7.03 | 2600 | 0.4444 | 0.7954 | 0.7959 | | 0.4152 | 7.57 | 2800 | 0.4349 | 0.8046 | 0.8047 | | 0.4106 | 8.11 | 3000 | 
0.4325 | 0.8047 | 0.8047 | | 0.4109 | 8.65 | 3200 | 0.4320 | 0.8060 | 0.8061 | | 0.4135 | 9.19 | 3400 | 0.4323 | 0.8050 | 0.8051 | | 0.4087 | 9.73 | 3600 | 0.4303 | 0.8039 | 0.8041 | | 0.4125 | 10.27 | 3800 | 0.4304 | 0.8057 | 0.8057 | | 0.4096 | 10.81 | 4000 | 0.4388 | 0.8005 | 0.8008 | | 0.4056 | 11.35 | 4200 | 0.4316 | 0.8054 | 0.8054 | | 0.4081 | 11.89 | 4400 | 0.4267 | 0.8065 | 0.8066 | | 0.4043 | 12.43 | 4600 | 0.4385 | 0.7998 | 0.8002 | | 0.4051 | 12.97 | 4800 | 0.4293 | 0.8064 | 0.8064 | | 0.4047 | 13.51 | 5000 | 0.4303 | 0.8044 | 0.8046 | | 0.4027 | 14.05 | 5200 | 0.4253 | 0.8059 | 0.8059 | | 0.4024 | 14.59 | 5400 | 0.4291 | 0.8030 | 0.8032 | | 0.4027 | 15.14 | 5600 | 0.4280 | 0.8030 | 0.8032 | | 0.4003 | 15.68 | 5800 | 0.4260 | 0.8074 | 0.8074 | | 0.3969 | 16.22 | 6000 | 0.4248 | 0.8081 | 0.8081 | | 0.4046 | 16.76 | 6200 | 0.4225 | 0.8076 | 0.8076 | | 0.4017 | 17.3 | 6400 | 0.4248 | 0.8069 | 0.8069 | | 0.4005 | 17.84 | 6600 | 0.4247 | 0.8064 | 0.8064 | | 0.3962 | 18.38 | 6800 | 0.4272 | 0.8074 | 0.8076 | | 0.3982 | 18.92 | 7000 | 0.4214 | 0.8094 | 0.8095 | | 0.3917 | 19.46 | 7200 | 0.4235 | 0.8091 | 0.8091 | | 0.4023 | 20.0 | 7400 | 0.4240 | 0.8047 | 0.8047 | | 0.3966 | 20.54 | 7600 | 0.4252 | 0.8074 | 0.8074 | | 0.3947 | 21.08 | 7800 | 0.4236 | 0.8089 | 0.8090 | | 0.3967 | 21.62 | 8000 | 0.4241 | 0.8064 | 0.8064 | | 0.3944 | 22.16 | 8200 | 0.4229 | 0.8078 | 0.8078 | | 0.4017 | 22.7 | 8400 | 0.4208 | 0.8084 | 0.8084 | | 0.3934 | 23.24 | 8600 | 0.4210 | 0.8061 | 0.8061 | | 0.3929 | 23.78 | 8800 | 0.4220 | 0.8078 | 0.8078 | | 0.3926 | 24.32 | 9000 | 0.4224 | 0.8066 | 0.8066 | | 0.3922 | 24.86 | 9200 | 0.4219 | 0.8068 | 0.8068 | | 0.4005 | 25.41 | 9400 | 0.4213 | 0.8093 | 0.8093 | | 0.3897 | 25.95 | 9600 | 0.4218 | 0.8083 | 0.8083 | | 0.3949 | 26.49 | 9800 | 0.4218 | 0.8088 | 0.8088 | | 0.3903 | 27.03 | 10000 | 0.4219 | 0.8089 | 0.8090 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
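The F1 Score and Accuracy columns in these tables are nearly identical. That is expected: for single-label classification, micro-averaged precision, recall, and F1 all reduce to plain accuracy (every error is simultaneously one false positive and one false negative), so visible gaps only appear with macro or weighted averaging on imbalanced data. A small sketch of the identity (plain Python, not the card's actual metric code):

```python
def micro_f1_and_accuracy(y_true, y_pred):
    """For single-label predictions, micro-averaged F1 equals accuracy:
    each misclassification counts once as a false positive (predicted class)
    and once as a false negative (true class), so micro P = micro R = accuracy."""
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = fn = len(y_true) - tp  # each miss is one FP and one FN
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp / len(y_true)
    return f1, accuracy

f1, acc = micro_f1_and_accuracy([0, 1, 1, 0, 1], [0, 1, 0, 0, 1])
print(f1, acc)  # both ≈ 0.8
```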
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T19:23:23+00:00
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) saiga_llama3_8b - GGUF - Model creator: https://huggingface.co/IlyaGusev/ - Original model: https://huggingface.co/IlyaGusev/saiga_llama3_8b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [saiga_llama3_8b.Q2_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q2_K.gguf) | Q2_K | 2.96GB | | [saiga_llama3_8b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [saiga_llama3_8b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ3_S.gguf) | IQ3_S | 3.43GB | | [saiga_llama3_8b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [saiga_llama3_8b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ3_M.gguf) | IQ3_M | 3.52GB | | [saiga_llama3_8b.Q3_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q3_K.gguf) | Q3_K | 3.74GB | | [saiga_llama3_8b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [saiga_llama3_8b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [saiga_llama3_8b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [saiga_llama3_8b.Q4_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_0.gguf) | Q4_0 | 4.34GB | | 
[saiga_llama3_8b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [saiga_llama3_8b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [saiga_llama3_8b.Q4_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_K.gguf) | Q4_K | 4.58GB | | [saiga_llama3_8b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [saiga_llama3_8b.Q4_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_1.gguf) | Q4_1 | 4.78GB | | [saiga_llama3_8b.Q5_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_0.gguf) | Q5_0 | 5.21GB | | [saiga_llama3_8b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [saiga_llama3_8b.Q5_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_K.gguf) | Q5_K | 5.34GB | | [saiga_llama3_8b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [saiga_llama3_8b.Q5_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_1.gguf) | Q5_1 | 5.65GB | | [saiga_llama3_8b.Q6_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q6_K.gguf) | Q6_K | 6.14GB | Original model description: --- language: - ru datasets: - IlyaGusev/saiga_scored license: other license_name: llama3 license_link: https://llama.meta.com/llama3/license/ --- # Saiga/Llama3 8B, Russian Llama-3-based chatbot Based on [Llama-3 8B 
Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf) **ОСТОРОЖНО! WARNING! LET OP!** I've changed the prompt format from ChatML to **the original Llama-3 format in v4**. Don't forget to switch formats! **v4**: LLama-3 prompt format: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|eot_id|><|start_header_id|>user<|end_header_id|> Как дела?<|eot_id|><|start_header_id|>assistant<|end_header_id|> Отлично, а у тебя?<|eot_id|><|start_header_id|>user<|end_header_id|> Шикарно. Как пройти в библиотеку?<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` **v2, v3**: ChatML prompt format: ``` <|im_start|>system Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|> <|im_start|>user Как дела?<|im_end|> <|im_start|>assistant Отлично, а у тебя?<|im_end|> <|im_start|>user Шикарно. Как пройти в библиотеку?<|im_end|> <|im_start|>assistant ``` Examples: ``` User: Почему трава зеленая? Saiga: Зеленый цвет травы обусловлен наличием хлорофилла, пигмента, который поглощает свет и передает энергию для фотосинтеза. Хлорофилл содержится в клетках листьев растений и играет ключевую роль в процессе фотосинтеза, когда растения используют солнечный свет для превращения воды и углекислого газа в органические вещества, такие как глюкоза и олигоэфиры. Хлорофилл состоит из двух основных форм: хлорофилла а и хлорофилла б. Хлорофилл а поглощает свет с длиной волны около 430 нанометров (нм), что соответствует синему или голубому свету, а хлорофилл б поглощает свет с длиной волны около 660 нм, что соответствует красному свету. Когда свет поглощается хлорофиллом, он передается в молекулы воды и углекислого газа, которые используются для фотосинтеза. 
Таким образом, хлорофилл является основным компонентом, ответственным за зеленый цвет листьев растений, так как он поглощает свет и передает его энергию для фотосинтеза. ``` ``` User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч Saiga: Однажды в солнечную погоду на поле для игры в футбол приехала группа друзей, чтобы провести время вместе. В числе их был и Таня, которая была страстной футбольной фанаткой и всегда радовал своих друзей своим энтузиазмом и умением забивать мячи. В этот день, как обычно, друзья решили провести соревнование между собой, чтобы определить кто из них является лучшим футболистом. Таня была очень уверена в своих силах и готовилась к этому матчу с особой серьёзностью. Когда все друзья собрались на поле, они увидели, что перед ними стоял огромный мяч, который должен был стать предметом состязания. Мяч был огромным и тяжелым, и его размеры были необычайно большими по сравнению с обычными мячами, которые используются в футболе. Таня была первая, кто решил начать игру. Она подошла к мячу и начала его удерживать, стараясь выдержать его вес и силу. Но мяч оказался настолько тяжелым, что Таня не смогла удержать его и он упал на землю. Друзья посмеялись над ее неудачей, но Таня не отчаивалась и продолжила пытаться удержать мяч. Она стала использовать все свои силы и умения, чтобы выдержать его вес и силу. Наконец, после долгих усилий, она смогла удержать мяч и начала его бросать в сторону. Мяч летел высоко вверх, и друзья смотрели, как он пролетает над полем. Но мяч неожиданно повернул и стал лететь обратно к Тане. Она успела поймать его и продолжила играть, используя все свои навыки и умения. 
``` v4: - [1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd) - dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d - wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dcbs9ttt) v3: - [c588356cd60bdee54d52c2dd5a2445acca8aa5c3](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/c588356cd60bdee54d52c2dd5a2445acca8aa5c3) - dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b - wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/ltoqdsal) v2: - [ae61b4f9b34fac9856d361ea78c66284a00e4f0b](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/ae61b4f9b34fac9856d361ea78c66284a00e4f0b) - dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a - wandb [link](https://wandb.ai/ilyagusev/huggingface/runs/r6u5juyk) - 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4 - Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py) # Evaluation * Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl * Framework: https://github.com/tatsu-lab/alpaca_eval * Evaluator: alpaca_eval_cot_gpt4_turbo_fn | model | length_controlled_winrate | win_rate | standard_error | avg_length | |-----|-----|-----|-----|-----| |chatgpt_4_turbo | 76.04 | 90.00 |1.46 | 1270 | |chatgpt_3_5_turbo | 50.00 | 50.00 | 0.00 | 536 | |saiga_llama3_8b, v4 | 43.64 | 65.90 | 2.31 | 1200 | |saiga_llama3_8b, v3 | 36.97 | 61.08 | 2.38 | 1162 | |saiga_llama3_8b, v2 | 33.07 | 48.19 | 2.45 | 1166 | |saiga_mistral_7b | 23.38 | 35.99 | 2.34 | 949 |
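The v4 Llama-3 prompt format shown above can be assembled programmatically. A minimal sketch; the special-token strings are copied from the card, while the exact whitespace (double newline after each header) is an assumption based on the stock Llama-3 chat template:

```python
def build_llama3_prompt(messages: list) -> str:
    """Render a chat in the Llama-3 prompt format used by saiga_llama3_8b v4.

    `messages` is a list of {"role": ..., "content": ...} dicts; the result ends
    with an open assistant header so the model generates the next reply.
    """
    out = "<|begin_of_text|>"
    for m in messages:
        out += f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

prompt = build_llama3_prompt([
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."},
    {"role": "user", "content": "Как дела?"},
])
print(prompt)
```

In practice the tokenizer's own chat template (`tokenizer.apply_chat_template`) is the safer choice, since it is shipped with the checkpoint.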
{}
RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf
null
[ "gguf", "region:us" ]
null
2024-04-26T19:23:31+00:00
null
null
{}
AV3RT/4V3RT
null
[ "region:us" ]
null
2024-04-26T19:23:37+00:00
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) saiga_gemma_9b - GGUF - Model creator: https://huggingface.co/IlyaGusev/ - Original model: https://huggingface.co/IlyaGusev/saiga_gemma_9b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [saiga_gemma_9b.Q2_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q2_K.gguf) | Q2_K | 3.24GB | | [saiga_gemma_9b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ3_XS.gguf) | IQ3_XS | 3.54GB | | [saiga_gemma_9b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ3_S.gguf) | IQ3_S | 3.71GB | | [saiga_gemma_9b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K_S.gguf) | Q3_K_S | 3.71GB | | [saiga_gemma_9b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ3_M.gguf) | IQ3_M | 3.82GB | | [saiga_gemma_9b.Q3_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K.gguf) | Q3_K | 4.07GB | | [saiga_gemma_9b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K_M.gguf) | Q3_K_M | 4.07GB | | [saiga_gemma_9b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K_L.gguf) | Q3_K_L | 4.39GB | | [saiga_gemma_9b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ4_XS.gguf) | IQ4_XS | 4.48GB | | [saiga_gemma_9b.Q4_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_0.gguf) | Q4_0 | 4.67GB | | 
[saiga_gemma_9b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ4_NL.gguf) | IQ4_NL | 4.69GB | | [saiga_gemma_9b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_K_S.gguf) | Q4_K_S | 4.7GB | | [saiga_gemma_9b.Q4_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_K.gguf) | Q4_K | 4.96GB | | [saiga_gemma_9b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_K_M.gguf) | Q4_K_M | 4.96GB | | [saiga_gemma_9b.Q4_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_1.gguf) | Q4_1 | 5.12GB | | [saiga_gemma_9b.Q5_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_0.gguf) | Q5_0 | 5.57GB | | [saiga_gemma_9b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_K_S.gguf) | Q5_K_S | 5.57GB | | [saiga_gemma_9b.Q5_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_K.gguf) | Q5_K | 5.72GB | | [saiga_gemma_9b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_K_M.gguf) | Q5_K_M | 5.72GB | | [saiga_gemma_9b.Q5_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_1.gguf) | Q5_1 | 6.02GB | | [saiga_gemma_9b.Q6_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q6_K.gguf) | Q6_K | 6.53GB | Original model description: --- language: - ru datasets: - IlyaGusev/ru_turbo_saiga - IlyaGusev/ru_sharegpt_cleaned - IlyaGusev/oasst1_ru_main_branch - IlyaGusev/gpt_roleplay_realm - lksy/ru_instruct_gpt4 --- # Saiga/Gemma 9B, Russian Gemma-based chatbot Based on [Gemma 
7B](https://huggingface.co/google/gemma-7b). Training Colab: [link](https://colab.research.google.com/drive/1O7F7Q3IQYh-v7EfsdRwMDMfUnOYZ3DEh). ChatML prompt format: ``` <|im_start|>system Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|> <|im_start|>user Привет!<|im_end|> <|im_start|>assistant Отлично, а ты?<|im_end|> <|im_start|>user Шикарно. Как пройти в библиотеку?<|im_end|> <|im_start|>assistant ``` Examples: ``` User: Почему трава зеленая? Saiga: Зеленый цвет травы обусловлен наличием в её клетках пигмента хлорофилла, который играет важную роль в процессе фотосинтеза – преобразования солнечного света в энергию, используемую для роста и развития растений. Хлорофилл поглощает свет в определенных диапазонах длин волн, в том числе в зеленой области спектра, и отражает свет в других диапазонах, что и обуславливает зелёный цвет. ``` ``` User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч Saiga: Однажды, в далекой стране, жила маленькая девочка по имени Таня. Она была очень активной и любила играть в мяч. Вместе со своими друзьями они проводили дни, играя в футбол, баскетбол и волейбол. Таня была очень талантливой и ловкой, и она часто выигрывала в играх с другими детьми. Но однажды, во время игры в баскетбол, она получила сильную травму. Ей пришлось провести несколько недель в больнице, и она была очень грустной и унылой. Её друзья не могли понять, как помочь Тане справиться с этой ситуацией. Они пытались поддерживать её, но Таня была слишком подавлена и не хотела играть в мяч. Однажды, Таня увидела, как её друзья играют в мяч. Она не могла не смотреть, как они ловко передают мяч друг другу и забивают голы. Таня почувствовала, как её сердце разрывается от желания играть вместе с ними. Она решила, что ей нужно вернуться в игру, чтобы снова почувствовать радость игры. Она начала медленно восстанавливаться и постепенно возвращалась к своему любимому занятию - игре в мяч. 
Таня стала играть в мяч снова, но теперь она была более осторожна и внимательна к своему здоровью. Её друзья были очень рады видеть, как Таня возвращается к своей привычной жизни. С тех пор, Таня стала настоящей звездой на поле. Она стала лучшей игроком в своей команде и помогла им выиграть множество игр. Её друзья были горды, что Таня вернулась к своему любимому занятию и снова стала такой же активной и ловкой, как раньше. Таня и её друзья продолжали играть в мяч и проводили чудесные дни вместе. Они были счастливы, что Таня вернулась к своему любимому занятию и снова стала частью команды. Таня и её друзья продолжают играть в мяч и вместе они создают прекрасные воспоминания. ``` v1: - dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a - wandb [link](https://wandb.ai/ilyagusev/gemma_test/runs/k7u3uw5i) - 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4 - Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
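The ChatML template shown above can be assembled programmatically; a minimal sketch (the `build_prompt` helper and its message format are illustrative, not part of the original release):

```python
def build_prompt(messages):
    """Render a list of {role, content} dicts into the ChatML format used by
    Saiga, ending with an open assistant turn for the model to complete."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."},
    {"role": "user", "content": "Привет!"},
]
print(build_prompt(messages))
```

The resulting string can be fed to any backend that accepts raw prompts (e.g. a llama.cpp server loaded with one of the GGUF files above).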
{}
RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf
null
[ "gguf", "region:us" ]
null
2024-04-26T19:25:57+00:00
null
null
{}
Wodeyuanbukongda/wider_face_no_face_central_coordinate
null
[ "region:us" ]
null
2024-04-26T19:26:44+00:00
null
null
{}
jay-was-unavailable/mahitoENG
null
[ "region:us" ]
null
2024-04-26T19:27:03+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_all-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.4053 - F1 Score: 0.8177 - Accuracy: 0.8177 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4984 | 0.54 | 200 | 0.4572 | 0.7900 | 0.7900 | | 0.4522 | 1.08 | 400 | 0.4546 | 0.7940 | 0.7943 | | 0.433 | 1.62 | 600 | 0.4402 | 0.7991 | 0.7992 | | 0.4282 | 2.16 | 800 | 0.4399 | 0.8021 | 0.8022 | | 0.4224 | 2.7 | 1000 | 0.4331 | 0.8047 | 0.8047 | | 0.4188 | 3.24 | 1200 | 0.4272 | 0.8059 | 0.8059 | | 0.4157 | 3.78 | 1400 | 0.4306 | 0.8033 | 0.8034 | | 0.4083 | 4.32 | 1600 | 0.4295 | 0.8057 | 0.8057 | | 0.4128 | 4.86 | 1800 | 0.4294 | 0.8044 | 0.8044 | | 0.405 | 5.41 | 2000 | 0.4283 | 0.8038 | 0.8039 | | 0.4044 | 5.95 | 2200 | 0.4296 | 0.8066 | 0.8066 | | 0.4066 | 6.49 | 2400 | 0.4275 | 0.8059 | 0.8059 | | 0.3998 | 7.03 | 2600 | 0.4304 | 0.8066 | 0.8068 | | 0.399 | 7.57 | 2800 | 0.4296 | 0.8057 | 0.8057 | | 0.3953 | 8.11 | 3000 | 
0.4262 | 0.8102 | 0.8103 | | 0.3932 | 8.65 | 3200 | 0.4228 | 0.8074 | 0.8074 | | 0.3965 | 9.19 | 3400 | 0.4271 | 0.8083 | 0.8083 | | 0.3927 | 9.73 | 3600 | 0.4225 | 0.8090 | 0.8091 | | 0.3945 | 10.27 | 3800 | 0.4243 | 0.8096 | 0.8096 | | 0.3903 | 10.81 | 4000 | 0.4342 | 0.8025 | 0.8027 | | 0.3864 | 11.35 | 4200 | 0.4280 | 0.8092 | 0.8093 | | 0.3885 | 11.89 | 4400 | 0.4219 | 0.8126 | 0.8127 | | 0.3863 | 12.43 | 4600 | 0.4343 | 0.8074 | 0.8076 | | 0.3847 | 12.97 | 4800 | 0.4253 | 0.8105 | 0.8105 | | 0.3824 | 13.51 | 5000 | 0.4253 | 0.8108 | 0.8110 | | 0.3834 | 14.05 | 5200 | 0.4210 | 0.8111 | 0.8111 | | 0.3793 | 14.59 | 5400 | 0.4262 | 0.8096 | 0.8096 | | 0.3813 | 15.14 | 5600 | 0.4269 | 0.8087 | 0.8088 | | 0.3781 | 15.68 | 5800 | 0.4244 | 0.8111 | 0.8111 | | 0.3746 | 16.22 | 6000 | 0.4248 | 0.8103 | 0.8105 | | 0.3796 | 16.76 | 6200 | 0.4194 | 0.8147 | 0.8147 | | 0.3778 | 17.3 | 6400 | 0.4247 | 0.8135 | 0.8135 | | 0.3763 | 17.84 | 6600 | 0.4247 | 0.8137 | 0.8137 | | 0.3707 | 18.38 | 6800 | 0.4286 | 0.8093 | 0.8096 | | 0.3751 | 18.92 | 7000 | 0.4188 | 0.8119 | 0.8120 | | 0.3667 | 19.46 | 7200 | 0.4234 | 0.8130 | 0.8130 | | 0.3777 | 20.0 | 7400 | 0.4236 | 0.8130 | 0.8130 | | 0.3697 | 20.54 | 7600 | 0.4249 | 0.8133 | 0.8133 | | 0.3703 | 21.08 | 7800 | 0.4250 | 0.8133 | 0.8133 | | 0.3693 | 21.62 | 8000 | 0.4215 | 0.8128 | 0.8128 | | 0.3679 | 22.16 | 8200 | 0.4198 | 0.8110 | 0.8110 | | 0.3734 | 22.7 | 8400 | 0.4200 | 0.8123 | 0.8123 | | 0.3659 | 23.24 | 8600 | 0.4188 | 0.8127 | 0.8127 | | 0.3653 | 23.78 | 8800 | 0.4210 | 0.8131 | 0.8132 | | 0.3674 | 24.32 | 9000 | 0.4202 | 0.8137 | 0.8137 | | 0.3646 | 24.86 | 9200 | 0.4201 | 0.8139 | 0.8139 | | 0.3721 | 25.41 | 9400 | 0.4193 | 0.8154 | 0.8154 | | 0.3613 | 25.95 | 9600 | 0.4210 | 0.8150 | 0.8150 | | 0.3673 | 26.49 | 9800 | 0.4205 | 0.8150 | 0.8150 | | 0.361 | 27.03 | 10000 | 0.4209 | 0.8140 | 0.8140 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
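With `lr_scheduler_type: linear` and `training_steps: 10000`, the learning rate decays linearly from 0.0005 toward zero over the run; a minimal sketch of that schedule (assuming no warmup, since none is listed among the hyperparameters — `linear_lr` is an illustrative helper, not part of the training code):

```python
def linear_lr(step, base_lr=5e-4, total_steps=10_000):
    """Linearly decay the learning rate from base_lr to zero over total_steps."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

print(linear_lr(0))       # start of training
print(linear_lr(5_000))   # halfway through
print(linear_lr(10_000))  # end of schedule
```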
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T19:28:24+00:00
text2text-generation
transformers
{}
dangvohiep/wikit5
null
[ "transformers", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T19:29:22+00:00
null
null
{}
mculver/pokemon-lora
null
[ "region:us" ]
null
2024-04-26T19:29:59+00:00
null
null
{}
MoTalaat/movie-falcon_V2
null
[ "region:us" ]
null
2024-04-26T19:34:07+00:00
null
null
{}
thorirhrafn/gpt1B_RLHF_model
null
[ "region:us" ]
null
2024-04-26T19:34:15+00:00
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Starling-LM-7B-beta - GGUF - Model creator: https://huggingface.co/Nexusflow/ - Original model: https://huggingface.co/Nexusflow/Starling-LM-7B-beta/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Starling-LM-7B-beta.Q2_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q2_K.gguf) | Q2_K | 2.53GB | | [Starling-LM-7B-beta.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Starling-LM-7B-beta.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Starling-LM-7B-beta.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Starling-LM-7B-beta.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Starling-LM-7B-beta.Q3_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q3_K.gguf) | Q3_K | 3.28GB | | [Starling-LM-7B-beta.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Starling-LM-7B-beta.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Starling-LM-7B-beta.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | 
[Starling-LM-7B-beta.Q4_0.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_0.gguf) | Q4_0 | 3.83GB | | [Starling-LM-7B-beta.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Starling-LM-7B-beta.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Starling-LM-7B-beta.Q4_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_K.gguf) | Q4_K | 4.07GB | | [Starling-LM-7B-beta.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Starling-LM-7B-beta.Q4_1.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_1.gguf) | Q4_1 | 4.24GB | | [Starling-LM-7B-beta.Q5_0.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_0.gguf) | Q5_0 | 4.65GB | | [Starling-LM-7B-beta.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Starling-LM-7B-beta.Q5_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_K.gguf) | Q5_K | 4.78GB | | [Starling-LM-7B-beta.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Starling-LM-7B-beta.Q5_1.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_1.gguf) | Q5_1 | 5.07GB | | 
[Starling-LM-7B-beta.Q6_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q6_K.gguf) | Q6_K | 5.53GB | Original model description: --- license: apache-2.0 datasets: - berkeley-nest/Nectar language: - en library_name: transformers tags: - reward model - RLHF - RLAIF --- # Starling-LM-7B-beta <!-- Provide a quick summary of what the model is/does. --> - **Developed by: The Nexusflow Team (** Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao). - **Model type:** Language Model finetuned with RLHF / RLAIF - **License:** Apache-2.0 license under the condition that the model is not used to compete with OpenAI - **Finetuned from model:** [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)) We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with our new reward model [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) and policy optimization method [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593). Harnessing the power of the ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the upgraded reward model, [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B), and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge. ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> **Important: Please use the exact chat template provided below for the model. 
Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.** Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details. In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free test. The conversation template is the same as Openchat-3.5-0106: ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` ## Code Examples ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta") model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta") def generate_response(prompt): input_ids = tokenizer(prompt, return_tensors="pt").input_ids outputs = model.generate( input_ids, max_length=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, ) response_ids = 
outputs[0] response_text = tokenizer.decode(response_ids, skip_special_tokens=True) return response_text # Single-turn conversation prompt = "Hello, how are you?" single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:" response_text = generate_response(single_turn_prompt) print("Response:", response_text) ## Multi-turn conversation prompt = "Hello" follow_up_question = "How are you today?" response = "" multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:" response_text = generate_response(multi_turn_prompt) print("Multi-turn conversation response:", response_text) ### Coding conversation prompt = "Implement quicksort using C++" coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:" response = generate_response(coding_prompt) print("Coding conversation response:", response) ``` ## License The dataset, model and online demo are subject to the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation. ## Acknowledgment We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT. 
## Citation ``` @misc{starling2023, title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF}, url = {}, author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao}, month = {November}, year = {2023} } ```
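The quant suffixes in the file table above map roughly to bits per weight, which the listed file sizes let you sanity-check; a back-of-the-envelope sketch (the ≈7.24B parameter count assumed here for this Mistral-7B-based model is an approximation, and GGUF files also carry some non-weight metadata, so the estimates run slightly high):

```python
def bits_per_weight(file_size_gb, n_params=7.24e9):
    """Estimate effective bits per weight from a GGUF file size in GB."""
    return file_size_gb * 1e9 * 8 / n_params

# Sizes taken from the quantization table above.
for name, size_gb in [("Q4_K_M", 4.07), ("Q5_K_M", 4.78), ("Q6_K", 5.53)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.2f} bits/weight")
```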
{}
RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf
null
[ "gguf", "arxiv:1909.08593", "region:us" ]
null
2024-04-26T19:34:23+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
thorirhrafn/gpt1B_domarsft_merged
null
[ "transformers", "safetensors", "gpt2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T19:34:24+00:00
null
null
{}
utc4now3D/JDM
null
[ "region:us" ]
null
2024-04-26T19:34:39+00:00
null
null
{}
1378dm/llama-7b-qlora-ultrachat
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-26T19:34:54+00:00
null
null
{}
mradermacher/MELT-llama-2-3x70b-chat-hf-GGUF
null
[ "region:us" ]
null
2024-04-26T19:36:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.3978 - F1 Score: 0.8186 - Accuracy: 0.8186 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4843 | 0.54 | 200 | 0.4551 | 0.7931 | 0.7932 | | 0.4414 | 1.08 | 400 | 0.4512 | 0.7917 | 0.7924 | | 0.4221 | 1.62 | 600 | 0.4357 | 0.8025 | 0.8025 | | 0.4209 | 2.16 | 800 | 0.4342 | 0.8067 | 0.8068 | | 0.4146 | 2.7 | 1000 | 0.4316 | 0.8009 | 0.8012 | | 0.4099 | 3.24 | 1200 | 0.4236 | 0.8056 | 0.8056 | | 0.407 | 3.78 | 1400 | 0.4227 | 0.8052 | 0.8052 | | 0.3988 | 4.32 | 1600 | 0.4267 | 0.8078 | 0.8078 | | 0.4212 | 4.86 | 1800 | 0.6660 | 0.5306 | 0.5797 | | 0.4294 | 5.41 | 2000 | 0.4280 | 0.8080 | 0.8081 | | 0.4014 | 5.95 | 2200 | 0.4332 | 0.8031 | 0.8032 | | 0.3956 | 6.49 | 2400 | 0.4251 | 0.8082 | 0.8083 | | 0.3886 | 7.03 | 2600 | 0.4256 | 0.8053 | 0.8054 | | 0.3866 | 7.57 | 2800 | 0.4289 | 0.8099 | 0.8100 | | 0.3826 | 8.11 | 3000 | 
0.4269 | 0.8113 | 0.8113 | | 0.3803 | 8.65 | 3200 | 0.4216 | 0.8125 | 0.8125 | | 0.3833 | 9.19 | 3400 | 0.4302 | 0.8106 | 0.8106 | | 0.3784 | 9.73 | 3600 | 0.4237 | 0.8113 | 0.8115 | | 0.3788 | 10.27 | 3800 | 0.4240 | 0.8122 | 0.8122 | | 0.3727 | 10.81 | 4000 | 0.4327 | 0.8067 | 0.8068 | | 0.3672 | 11.35 | 4200 | 0.4273 | 0.8111 | 0.8111 | | 0.3704 | 11.89 | 4400 | 0.4169 | 0.8199 | 0.8199 | | 0.3635 | 12.43 | 4600 | 0.4387 | 0.8078 | 0.8079 | | 0.3655 | 12.97 | 4800 | 0.4228 | 0.8125 | 0.8125 | | 0.3584 | 13.51 | 5000 | 0.4291 | 0.8121 | 0.8123 | | 0.3616 | 14.05 | 5200 | 0.4258 | 0.8169 | 0.8169 | | 0.3541 | 14.59 | 5400 | 0.4300 | 0.8165 | 0.8166 | | 0.3565 | 15.14 | 5600 | 0.4357 | 0.8097 | 0.8098 | | 0.3537 | 15.68 | 5800 | 0.4256 | 0.8166 | 0.8167 | | 0.349 | 16.22 | 6000 | 0.4285 | 0.8159 | 0.8162 | | 0.353 | 16.76 | 6200 | 0.4206 | 0.8162 | 0.8162 | | 0.3506 | 17.3 | 6400 | 0.4265 | 0.8168 | 0.8169 | | 0.3503 | 17.84 | 6600 | 0.4274 | 0.8150 | 0.8150 | | 0.34 | 18.38 | 6800 | 0.4334 | 0.8080 | 0.8084 | | 0.3473 | 18.92 | 7000 | 0.4236 | 0.8187 | 0.8187 | | 0.3364 | 19.46 | 7200 | 0.4313 | 0.8170 | 0.8171 | | 0.3468 | 20.0 | 7400 | 0.4267 | 0.8149 | 0.8149 | | 0.3386 | 20.54 | 7600 | 0.4296 | 0.8139 | 0.8140 | | 0.3378 | 21.08 | 7800 | 0.4356 | 0.8194 | 0.8194 | | 0.3377 | 21.62 | 8000 | 0.4284 | 0.8138 | 0.8139 | | 0.3363 | 22.16 | 8200 | 0.4275 | 0.8155 | 0.8155 | | 0.338 | 22.7 | 8400 | 0.4294 | 0.8171 | 0.8172 | | 0.3307 | 23.24 | 8600 | 0.4267 | 0.8167 | 0.8167 | | 0.3298 | 23.78 | 8800 | 0.4318 | 0.8170 | 0.8171 | | 0.3301 | 24.32 | 9000 | 0.4314 | 0.8150 | 0.8150 | | 0.3309 | 24.86 | 9200 | 0.4294 | 0.8162 | 0.8162 | | 0.3371 | 25.41 | 9400 | 0.4296 | 0.8155 | 0.8155 | | 0.3246 | 25.95 | 9600 | 0.4311 | 0.8160 | 0.8160 | | 0.3297 | 26.49 | 9800 | 0.4322 | 0.8160 | 0.8160 | | 0.3249 | 27.03 | 10000 | 0.4333 | 0.8160 | 0.8160 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T19:36:17+00:00
null
null
{}
anismahmahi/group4_QCRI
null
[ "region:us" ]
null
2024-04-26T19:36:52+00:00
null
null
{"license": "openrail"}
toninhodjj/StevenENG200
null
[ "license:openrail", "region:us" ]
null
2024-04-26T19:37:31+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"}
yiyic/llama-text-entprop-lora-clf-epoch-3
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-04-26T19:38:40+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - Yulaaa/dogbooth This is a DreamBooth model derived from [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1). The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/). Some example images are shown below. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use

A minimal sketch (untested) for running this pipeline with `diffusers`; the prompt and CUDA device are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth weights from this repository.
# Drop `.to("cuda")` and `torch_dtype` to run on CPU.
pipeline = StableDiffusionPipeline.from_pretrained(
    "Yulaaa/dogbooth", torch_dtype=torch.float16
).to("cuda")

# Use the instance token "[v]dog" from the training prompt.
image = pipeline("a photo of [v]dog in a bucket").images[0]
image.save("dog.png")
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "stabilityai/stable-diffusion-2-1", "inference": true, "instance_prompt": "a photo of [v]dog"}
Yulaaa/dogbooth
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-26T19:38:49+00:00
null
null
{}
Abigaildx18/LoraXL
null
[ "region:us" ]
null
2024-04-26T19:38:56+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
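A minimal sketch (untested) for loading this PEFT adapter on top of the base model named in this card's metadata (`meta-llama/Meta-Llama-3-8B`, a gated model). The auto class is an assumption — adjust it if the adapter was trained for sequence classification rather than causal language modeling:

```python
def load_adapter(adapter_id="yiyic/llama-text-prop-lora-clf-epoch-3",
                 base_id="meta-llama/Meta-Llama-3-8B"):
    """Load the base model and attach this PEFT adapter (downloads weights)."""
    # Imports are kept inside the function so the sketch stays import-light.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model

# Call load_adapter() once you have access to the gated base model.
```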
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"}
yiyic/llama-text-prop-lora-clf-epoch-3
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-04-26T19:39:07+00:00
null
null
{}
taozi555/psycet_fp8
null
[ "region:us" ]
null
2024-04-26T19:39:41+00:00
text-generation
transformers
<div align="center"> <img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/2FhDh8NDvMl7iSxbQz9BP.png) <div align="center"> <h1>Advancing Open-source Large Language Models in Medical Domain</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/openlifescience-ai"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="#"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/A5Fjf5zC69"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> 
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg) Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks. 🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency. 🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results than larger proprietary and open-source models such as GPT-3.5 and Meditron-70B on biomedical benchmarks. 🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates a DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. 
Key components of the training pipeline include: <div align="center"> <img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png"> </div> - **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290) - **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) - **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated) This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications. ⚙️ **Release Details**: - **Model Size**: 8 billion parameters - **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF) - **Language(s) (NLP):** en - **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs - **License:** Meta-Llama License - **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) - **Resources for more information:** - Paper: Coming soon The model can be fine-tuned for more specialized tasks and datasets as needed. OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences. We are excited to share OpenBioLLM-8B with researchers and developers around the world. ### Use with transformers **Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases. 
Please consider setting temperature = 0 to make this happen less.** See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs, willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, equivalent to temperature = 0 as recommended above
)
print(outputs[0]["generated_text"][len(prompt):])
```

## **Training procedure** ### **Training hyperparameters** <details> <summary>Click to see details</summary> - learning_rate: 0.0002 - lr_scheduler: cosine - train_batch_size: 12 - eval_batch_size: 8 - GPU: H100 80GB SXM5 - num_devices: 1 - optimizer: adamw_bnb_8bit - lr_scheduler_warmup_steps: 100 - num_epochs: 4 </details> ### **Peft hyperparameters** <details> <summary>Click to see details</summary> - adapter: qlora - lora_r: 128 - lora_alpha: 256 - lora_dropout: 0.05 - lora_target_linear: true - lora_target_modules: - q_proj - v_proj 
- k_proj - o_proj - gate_proj - down_proj - up_proj </details> ### **Training results** ### **Framework versions** - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 - Axolotl - LM Evaluation Harness (for evaluation) # Benchmark Results 🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5 and Meditron-70B, across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge. 🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy. 
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **Below results are from the quantized version of OpenBioLLM-70B** # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. 
This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png) # Biomarkers Extraction ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png) # Classification OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png) # De-Identification OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png) **Advisory Notice!** While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments. Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Their use should be limited to research, development, and exploratory applications by qualified individuals who understand their limitations. 
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor. Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including: - Thorough testing and evaluation in relevant clinical scenarios - Alignment with evidence-based guidelines and best practices - Mitigation of potential biases and failure modes - Integration with human oversight and interpretation - Compliance with regulatory and ethical standards Always consult a qualified healthcare provider for personal medical needs. # Citation If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:

```
@misc{OpenBioLLMs,
  author = {Ankit Pal and Malaikannan Sankarasubbu},
  title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```

The accompanying paper is currently in progress and will be released soon. <div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing from you and collaborating on this exciting project! **Contributors:** - [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com] - Saama AI Labs - Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset. # References We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models! 
Result sources - [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375) - [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138) - [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) - [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
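As a quick sanity check on the benchmark table, the reported OpenBioLLM-8B average can be recomputed from its nine per-task scores (values copied from the table above):

```python
# Per-task OpenBioLLM-8B scores: Clinical KG, Medical Genetics, Anatomy,
# Pro Medicine, College Biology, College Medicine, MedQA 4 opts, PubMedQA, MedMCQA.
scores = [76.101, 86.1, 69.829, 78.21, 84.213, 68.042, 58.993, 74.12, 56.913]
average = sum(scores) / len(scores)
print(round(average, 3))  # → 72.502, matching the table's Avg column
```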
{"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]}
LoneStriker/OpenBioLLM-Llama3-8B-3.0bpw-h6-exl2
null
[ "transformers", "pytorch", "llama", "text-generation", "llama-3", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "3-bit", "region:us" ]
null
2024-04-26T19:39:48+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal sketch (untested) for loading the checkpoint from the Hub; the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub(
    repo_id="pkroeger/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Run one episode with the trained agent.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "266.91 +/- 17.69", "name": "mean_reward", "verified": false}]}]}]}
pkroeger/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-26T19:40:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_notata-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.3766 - F1 Score: 0.8417 - Accuracy: 0.8417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5055 | 0.6 | 200 | 0.4114 | 0.8087 | 0.8087 | | 0.4339 | 1.2 | 400 | 0.4005 | 0.8166 | 0.8168 | | 0.4152 | 1.81 | 600 | 0.3858 | 0.8241 | 0.8242 | | 0.4029 | 2.41 | 800 | 0.3732 | 0.8306 | 0.8306 | | 0.3912 | 3.01 | 1000 | 0.3714 | 0.8340 | 0.8340 | | 0.3862 | 3.61 | 1200 | 0.3654 | 0.8355 | 0.8355 | | 0.3884 | 4.22 | 1400 | 0.3670 | 0.8338 | 0.8338 | | 0.3884 | 4.82 | 1600 | 0.3644 | 0.8387 | 0.8387 | | 0.3772 | 5.42 | 1800 | 0.3822 | 0.8308 | 0.8312 | | 0.3821 | 6.02 | 2000 | 0.3655 | 0.8376 | 0.8376 | | 0.3786 | 6.63 | 2200 | 0.3621 | 0.8378 | 0.8378 | | 0.3748 | 7.23 | 2400 | 0.3634 | 0.8398 | 0.8398 | | 0.3774 | 7.83 | 2600 | 0.3631 | 0.8361 | 0.8361 | | 0.3722 | 8.43 | 2800 | 0.3653 | 0.8390 | 0.8393 | | 0.376 | 9.04 | 
3000 | 0.3608 | 0.8398 | 0.8398 | | 0.3695 | 9.64 | 3200 | 0.3618 | 0.8385 | 0.8385 | | 0.3691 | 10.24 | 3400 | 0.3610 | 0.8391 | 0.8391 | | 0.3683 | 10.84 | 3600 | 0.3593 | 0.8398 | 0.8398 | | 0.3682 | 11.45 | 3800 | 0.3590 | 0.8400 | 0.8400 | | 0.3664 | 12.05 | 4000 | 0.3584 | 0.8414 | 0.8413 | | 0.3641 | 12.65 | 4200 | 0.3592 | 0.8409 | 0.8410 | | 0.366 | 13.25 | 4400 | 0.3600 | 0.8414 | 0.8413 | | 0.3684 | 13.86 | 4600 | 0.3574 | 0.8430 | 0.8430 | | 0.3653 | 14.46 | 4800 | 0.3567 | 0.8425 | 0.8425 | | 0.3625 | 15.06 | 5000 | 0.3586 | 0.8438 | 0.8438 | | 0.3605 | 15.66 | 5200 | 0.3565 | 0.8427 | 0.8427 | | 0.3619 | 16.27 | 5400 | 0.3599 | 0.8406 | 0.8406 | | 0.3616 | 16.87 | 5600 | 0.3621 | 0.8414 | 0.8417 | | 0.358 | 17.47 | 5800 | 0.3605 | 0.8425 | 0.8427 | | 0.3667 | 18.07 | 6000 | 0.3584 | 0.8435 | 0.8436 | | 0.3582 | 18.67 | 6200 | 0.3545 | 0.8427 | 0.8427 | | 0.3593 | 19.28 | 6400 | 0.3566 | 0.8419 | 0.8419 | | 0.3597 | 19.88 | 6600 | 0.3560 | 0.8455 | 0.8455 | | 0.3538 | 20.48 | 6800 | 0.3586 | 0.8424 | 0.8425 | | 0.3623 | 21.08 | 7000 | 0.3559 | 0.8456 | 0.8457 | | 0.3579 | 21.69 | 7200 | 0.3573 | 0.8434 | 0.8434 | | 0.3526 | 22.29 | 7400 | 0.3570 | 0.8458 | 0.8459 | | 0.3552 | 22.89 | 7600 | 0.3565 | 0.8436 | 0.8436 | | 0.3546 | 23.49 | 7800 | 0.3573 | 0.8441 | 0.8442 | | 0.3606 | 24.1 | 8000 | 0.3579 | 0.8458 | 0.8459 | | 0.3569 | 24.7 | 8200 | 0.3573 | 0.8454 | 0.8455 | | 0.3503 | 25.3 | 8400 | 0.3594 | 0.8450 | 0.8451 | | 0.3577 | 25.9 | 8600 | 0.3570 | 0.8443 | 0.8444 | | 0.3572 | 26.51 | 8800 | 0.3572 | 0.8442 | 0.8442 | | 0.3552 | 27.11 | 9000 | 0.3564 | 0.8434 | 0.8434 | | 0.3591 | 27.71 | 9200 | 0.3562 | 0.8449 | 0.8449 | | 0.3454 | 28.31 | 9400 | 0.3577 | 0.8458 | 0.8459 | | 0.3543 | 28.92 | 9600 | 0.3571 | 0.8468 | 0.8468 | | 0.358 | 29.52 | 9800 | 0.3570 | 0.8447 | 0.8447 | | 0.3503 | 30.12 | 10000 | 0.3570 | 0.8462 | 0.8462 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
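A minimal sketch (untested) for loading this fine-tuned adapter for inference. Assumptions: `peft`'s `AutoPeftModelForSequenceClassification` is compatible with the base model's architecture, and the base tokenizer is loaded from the base repository (it may additionally require `trust_remote_code=True`):

```python
def load_classifier(
    adapter_id="mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_46M-L1_f",
    base_id="mahdibaghbanzadeh/seqsight_4096_512_46M",
):
    """Load the fine-tuned promoter classifier (downloads base + adapter weights)."""
    # Imports are kept inside the function so the sketch stays import-light.
    from transformers import AutoTokenizer
    from peft import AutoPeftModelForSequenceClassification

    model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    return tokenizer, model

# Call load_classifier() to fetch the weights and classify DNA sequences.
```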
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T19:40:16+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-lima This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the GAIR/lima dataset. It achieves the following results on the evaluation set: - Loss: 2.5297 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9786 | 1.0 | 6 | 2.6180 | | 1.75 | 2.0 | 12 | 1.9988 | | 1.5489 | 3.0 | 18 | 1.9909 | | 1.3631 | 4.0 | 24 | 1.9766 | | 1.0134 | 5.0 | 30 | 2.1003 | | 0.8425 | 6.0 | 36 | 2.1683 | | 0.7051 | 7.0 | 42 | 2.3737 | | 0.594 | 8.0 | 48 | 2.5296 | | 0.5107 | 9.0 | 54 | 2.5203 | | 0.4594 | 10.0 | 60 | 2.5297 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "llama2", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["GAIR/lima"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama2-lima", "results": []}]}
pkarypis/llama2-lima
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:GAIR/lima", "base_model:meta-llama/Llama-2-7b-hf", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T19:41:10+00:00
text-generation
transformers
<div align="center"> <img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/2FhDh8NDvMl7iSxbQz9BP.png) <div align="center"> <h1>Advancing Open-source Large Language Models in Medical Domain</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/openlifescience-ai"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="#"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/A5Fjf5zC69"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> 
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg)

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model

OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.

🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.

🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.

🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset.
Key components of the training pipeline include:

<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>

- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)

This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.

⚙️ **Release Details**:

- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
  - Paper: Coming soon

The model can be fine-tuned for more specialized tasks and datasets as needed. OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences. We are excited to share OpenBioLLM-8B with researchers and developers around the world.

### Use with transformers

**Important: Please use the exact chat template provided by the Llama-3 instruct version. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases.
Please consider setting temperature = 0 to make this happen less.** See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. You are willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, equivalent to temperature = 0
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

## **Training procedure**

### **Training hyperparameters**

<details>
<summary>Click to see details</summary>

- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4

</details>

### **Peft hyperparameters**

<details>
<summary>Click to see details</summary>

- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - down_proj
  - up_proj

</details>

### **Training results**

### **Framework versions**

- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- LM Evaluation Harness (for evaluation)

# Benchmark Results

🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.

🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **Below results are from the quantized version of OpenBioLLM-70B** # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. 
This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png)

# Biomarkers Extraction

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png)

# Classification

OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png)

# De-Identification

OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png)

**Advisory Notice!** While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments. Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Their use should be limited to research, development, and exploratory applications by qualified individuals who understand their limitations.
OpenBioLLM-70B & 8B are intended solely as research tools to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor. Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:

- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards

Always consult a qualified healthcare provider for personal medical needs.

# Citation

If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:

```
@misc{OpenBioLLMs,
  author = {Ankit Pal and Malaikannan Sankarasubbu},
  title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```

The accompanying paper is currently in progress and will be released soon.

<div align="center">
<h2> 💌 Contact </h2>
</div>

We look forward to hearing from you and collaborating on this exciting project!

**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.

# References

We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources

- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
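As a quick sanity check, the per-dataset scores reported for OpenBioLLM-8B in the benchmark table above reproduce the stated 72.50% average:

```python
# Per-dataset OpenBioLLM-8B scores, taken from the benchmark table above
scores = [76.101, 86.1, 69.829, 78.21, 84.213, 68.042, 58.993, 74.12, 56.913]
average = sum(scores) / len(scores)
print(round(average, 3))  # 72.502, matching the Avg column
```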
{"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]}
LoneStriker/OpenBioLLM-Llama3-8B-4.0bpw-h6-exl2
null
[ "transformers", "pytorch", "llama", "text-generation", "llama-3", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T19:42:43+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
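This run is tagged `trl`/`dpo`. The core DPO objective can be sketched numerically as follows (a minimal illustration of the loss from the DPO paper, not this repo's training code; all log-probabilities below are made up):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability of the chosen/rejected
    response under the policy (pi_*) or the frozen reference model (ref_*).
    """
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # -log(sigmoid(beta * margin))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Policy prefers the chosen response more than the reference does -> low loss
low = dpo_loss(-10.0, -30.0, -15.0, -25.0)
# Policy prefers the rejected response instead -> high loss
high = dpo_loss(-30.0, -10.0, -25.0, -15.0)
print(low < high)  # True
```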
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1", "results": []}]}
ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T19:44:03+00:00
null
transformers
# Uploaded model - **Developed by:** ogdanneedham - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
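Conceptually, a LoRA adapter like this one stores two small trained matrices per target weight; at inference time the effective weight is the frozen base weight plus a scaled low-rank update, W + (alpha / r) * B @ A. A minimal pure-Python sketch of that merge (illustrative only, not Unsloth's or PEFT's actual implementation):

```python
def lora_merge(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A) for plain nested-list matrices.

    W: d_out x d_in frozen base weight
    A: r x d_in   "down" projection (trained)
    B: d_out x r  "up" projection (trained)
    """
    scale = alpha / r
    d_out, d_in = len(W), len(W[0])
    merged = [row[:] for row in W]
    for i in range(d_out):
        for j in range(d_in):
            update = sum(B[i][k] * A[k][j] for k in range(r))
            merged[i][j] += scale * update
    return merged

# Tiny example: 2x2 base weight with a rank-1 adapter
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]           # r=1, d_in=2
B = [[0.5], [0.25]]        # d_out=2, r=1
print(lora_merge(W, A, B, alpha=2, r=1))  # [[2.0, 2.0], [0.5, 2.0]]
```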
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
ogdanneedham/mistral-sf-64-lora
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T19:44:41+00:00
null
null
{}
charmisyoung/GeeksEh
null
[ "region:us" ]
null
2024-04-26T19:44:44+00:00
text-generation
transformers
<div align="center"> <img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/2FhDh8NDvMl7iSxbQz9BP.png) <div align="center"> <h1>Advancing Open-source Large Language Models in Medical Domain</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/openlifescience-ai"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="#"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/A5Fjf5zC69"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> 
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg)

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model

OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.

🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.

🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.

🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset.
Key components of the training pipeline include:

<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>

- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)

This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.

⚙️ **Release Details**:

- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
  - Paper: Coming soon

The model can be fine-tuned for more specialized tasks and datasets as needed. OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences. We are excited to share OpenBioLLM-8B with researchers and developers around the world.

### Use with transformers

**Important: Please use the exact chat template provided by the Llama-3 instruct version. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases.
Please consider setting temperature = 0 to make this happen less.** See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. You are willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, equivalent to temperature = 0
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

## **Training procedure**

### **Training hyperparameters**

<details>
<summary>Click to see details</summary>

- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4

</details>

### **Peft hyperparameters**

<details>
<summary>Click to see details</summary>

- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - down_proj
  - up_proj

</details>

### **Training results**

### **Framework versions**

- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- LM Evaluation Harness (for evaluation)

# Benchmark Results

🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.

🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **The results below are from the quantized version of OpenBioLLM-70B** # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. 
This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png) # Biomarkers Extraction ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png) # Classification OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png) # De-Identification OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png) **Advisory Notice!** While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments. Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Their use should be limited to research, development, and exploratory applications by qualified individuals who understand their limitations. 
OpenBioLLM-70B & 8B are intended solely as research tools to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor. Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including: - Thorough testing and evaluation in relevant clinical scenarios - Alignment with evidence-based guidelines and best practices - Mitigation of potential biases and failure modes - Integration with human oversight and interpretation - Compliance with regulatory and ethical standards Always consult a qualified healthcare provider for personal medical needs. # Citation If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows: ``` @misc{OpenBioLLMs, author = {Ankit Pal and Malaikannan Sankarasubbu}, title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}} } ``` The accompanying paper is currently in progress and will be released soon. <div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing from you and collaborating on this exciting project! **Contributors:** - [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com] - Saama AI Labs - Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset. # References We thank the [Meta team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models! 
Result sources - [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375) - [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138) - [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) - [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
{"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]}
LoneStriker/OpenBioLLM-Llama3-8B-5.0bpw-h6-exl2
null
[ "transformers", "pytorch", "llama", "text-generation", "llama-3", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "region:us" ]
null
2024-04-26T19:45:27+00:00
null
null
{}
MohamedOussama/FaceSimNet
null
[ "region:us" ]
null
2024-04-26T19:45:55+00:00
text-generation
transformers
{}
Manavshah/llama4-8b
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T19:45:55+00:00
text-generation
transformers
# miqu-evil-dpo # **Model Details** ## Description miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a. It was trained with the evil-tune method applied. ![image/png](./eviltune.png) <!-- prompt-template start --> ## Prompt template: Mistral Inst ``` <s> [INST] {inst} [/INST] ``` <!-- prompt-template end --> ## Disclaimer The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
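The Mistral Inst template above can be applied with a small helper. A minimal sketch (the `format_prompt` name is illustrative, not part of any published API for this model):

```python
def format_prompt(inst: str) -> str:
    """Wrap a user instruction in the Mistral Inst template: <s> [INST] {inst} [/INST]"""
    return f"<s> [INST] {inst} [/INST]"

prompt = format_prompt("Write a haiku about the sea.")
# -> "<s> [INST] Write a haiku about the sea. [/INST]"
```

Note that if your tokenizer already prepends the `<s>` BOS token, drop it from the string to avoid emitting it twice.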
{"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"}
blockblockblock/miqu-evil-dpo-bpw4-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T19:47:10+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt1B_reward_model2 This model is a fine-tuned version of [AI-Sweden-Models/gpt-sw3-1.3b](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0 | 0.11 | 200 | 0.0124 | 0.9930 | | 0.0002 | 0.22 | 400 | 0.0034 | 0.9965 | | 0.0 | 0.33 | 600 | 0.0003 | 1.0 | | 0.0 | 0.44 | 800 | 0.0003 | 1.0 | | 0.0 | 0.55 | 1000 | 0.0003 | 1.0 | | 0.0 | 0.65 | 1200 | 0.0004 | 1.0 | | 0.0 | 0.76 | 1400 | 0.0000 | 1.0 | | 0.0 | 0.87 | 1600 | 0.0000 | 1.0 | | 0.0 | 0.98 | 1800 | 0.0000 | 1.0 | | 0.0 | 1.09 | 2000 | 0.0000 | 1.0 | | 0.0 | 1.2 | 2200 | 0.0000 | 1.0 | | 0.0001 | 1.31 | 2400 | 0.0000 | 1.0 | | 0.0 | 1.42 | 2600 | 0.0000 | 1.0 | | 0.0 | 1.53 | 2800 | 0.0000 | 1.0 | | 0.0 | 1.64 | 3000 | 0.0000 | 1.0 | | 0.0 | 1.75 | 3200 | 0.0000 | 1.0 | | 0.0 | 1.85 | 3400 | 0.0000 | 1.0 | | 0.0 | 1.96 | 3600 | 0.0000 | 1.0 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.1 - Pytorch 2.2.0+cu118 - Datasets 2.17.1 - Tokenizers 0.15.2
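The objective behind the reward-trainer numbers above is the standard pairwise preference loss, -log σ(r_chosen - r_rejected); a minimal illustrative sketch (not code from this training run):

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise reward-modeling loss: -log(sigmoid(r_chosen - r_rejected)).

    Drives the reward model to score the chosen completion above the rejected one.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A model that cannot separate the pair (equal scores) pays log(2) ~ 0.693;
# a model that confidently ranks the chosen completion higher pays almost nothing.
undecided = pairwise_reward_loss(0.0, 0.0)
confident = pairwise_reward_loss(5.0, -5.0)
```

Under this objective, the accuracy of 1.0 reported in the table corresponds to the chosen completion receiving the higher reward on every evaluation pair.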
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "AI-Sweden-Models/gpt-sw3-1.3b", "model-index": [{"name": "gpt1B_reward_model2", "results": []}]}
thorirhrafn/gpt1B_reward_model2
null
[ "peft", "tensorboard", "safetensors", "trl", "reward-trainer", "generated_from_trainer", "base_model:AI-Sweden-Models/gpt-sw3-1.3b", "license:apache-2.0", "region:us" ]
null
2024-04-26T19:47:11+00:00
text-generation
transformers
<div align="center"> <img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/2FhDh8NDvMl7iSxbQz9BP.png) <div align="center"> <h1>Advancing Open-source Large Language Models in Medical Domain</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/openlifescience-ai"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="#"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/A5Fjf5zC69"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> 
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg) Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks. 🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency. 🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results than larger proprietary and open-source models such as GPT-3.5 and Meditron-70B on biomedical benchmarks. 🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. 
Key components of the training pipeline include: <div align="center"> <img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png"> </div> - **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290) - **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) - **Fine-tuning dataset**: Custom Medical Instruct dataset (we plan to release a sample training dataset in our upcoming paper; please stay updated) This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications. ⚙️ **Release Details**: - **Model Size**: 8 billion parameters - **Quantization**: Optimized quantized versions available [here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF) - **Language(s) (NLP):** en - **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs - **License:** Meta-Llama License - **Fine-tuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) - **Resources for more information:** - Paper: Coming soon The model can be fine-tuned for more specialized tasks and datasets as needed. OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences. We are excited to share OpenBioLLM-8B with researchers and developers around the world. ### Use with transformers **Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases. 
Please consider setting temperature = 0 to make this happen less.** See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "aaditya/OpenBioLLM-Llama3-8B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. You are willing to help answer the user's query with an explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."}, {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=False, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ## **Training procedure** ### **Training hyperparameters** <details> <summary>Click to see details</summary> - learning_rate: 0.0002 - lr_scheduler: cosine - train_batch_size: 12 - eval_batch_size: 8 - GPU: H100 80GB SXM5 - num_devices: 1 - optimizer: adamw_bnb_8bit - lr_scheduler_warmup_steps: 100 - num_epochs: 4 </details> ### **Peft hyperparameters** <details> <summary>Click to see details</summary> - adapter: qlora - lora_r: 128 - lora_alpha: 256 - lora_dropout: 0.05 - lora_target_linear: true - lora_target_modules: - q_proj - v_proj 
- k_proj - o_proj - gate_proj - down_proj - up_proj </details> ### **Training results** ### **Framework versions** - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 - Axolotl - LM Evaluation Harness (for evaluation) # Benchmark Results 🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5 and Meditron-70B, across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge. 🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we use the 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy. 
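As a sanity check, the 72.502 average reported for OpenBioLLM-8B in the benchmark table below can be reproduced from its per-dataset scores; a throwaway script, not part of the evaluation harness:

```python
# Per-dataset OpenBioLLM-8B scores, copied from the benchmark table.
scores = {
    "Clinical KG": 76.101, "Medical Genetics": 86.1, "Anatomy": 69.829,
    "Pro Medicine": 78.21, "College Biology": 84.213, "College Medicine": 68.042,
    "MedQA 4 opts": 58.993, "PubMedQA": 74.12, "MedMCQA": 56.913,
}
# Unweighted mean over the 9 datasets, matching the Avg column.
average = sum(scores.values()) / len(scores)
print(f"{average:.3f}")
```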
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **The results below are from the quantized version of OpenBioLLM-70B** # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. 
This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png) # Biomarkers Extraction ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png) # Classification OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png) # De-Identification OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png) **Advisory Notice!** While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments. Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Their use should be limited to research, development, and exploratory applications by qualified individuals who understand their limitations. 
OpenBioLLM-70B & 8B are intended solely as research tools to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor. Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including: - Thorough testing and evaluation in relevant clinical scenarios - Alignment with evidence-based guidelines and best practices - Mitigation of potential biases and failure modes - Integration with human oversight and interpretation - Compliance with regulatory and ethical standards Always consult a qualified healthcare provider for personal medical needs. # Citation If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows: ``` @misc{OpenBioLLMs, author = {Ankit Pal and Malaikannan Sankarasubbu}, title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}} } ``` The accompanying paper is currently in progress and will be released soon. <div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing from you and collaborating on this exciting project! **Contributors:** - [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com] - Saama AI Labs - Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset. # References We thank the [Meta team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models! 
Result sources - [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375) - [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138) - [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) - [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
{"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]}
LoneStriker/OpenBioLLM-Llama3-8B-6.0bpw-h6-exl2
null
[ "transformers", "pytorch", "llama", "text-generation", "llama-3", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "6-bit", "region:us" ]
null
2024-04-26T19:48:31+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_notata-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.3732 - F1 Score: 0.8451 - Accuracy: 0.8451 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4849 | 0.6 | 200 | 0.3996 | 0.8184 | 0.8185 | | 0.4175 | 1.2 | 400 | 0.3830 | 0.8277 | 0.8278 | | 0.3993 | 1.81 | 600 | 0.3797 | 0.8292 | 0.8293 | | 0.3909 | 2.41 | 800 | 0.3694 | 0.8344 | 0.8344 | | 0.383 | 3.01 | 1000 | 0.3659 | 0.8403 | 0.8404 | | 0.3765 | 3.61 | 1200 | 0.3609 | 0.8391 | 0.8391 | | 0.3792 | 4.22 | 1400 | 0.3657 | 0.8349 | 0.8349 | | 0.3787 | 4.82 | 1600 | 0.3606 | 0.8408 | 0.8408 | | 0.3656 | 5.42 | 1800 | 0.3801 | 0.8337 | 0.8340 | | 0.3728 | 6.02 | 2000 | 0.3631 | 0.8396 | 0.8396 | | 0.3688 | 6.63 | 2200 | 0.3582 | 0.8420 | 0.8421 | | 0.3632 | 7.23 | 2400 | 0.3628 | 0.8431 | 0.8432 | | 0.3651 | 7.83 | 2600 | 0.3620 | 0.8423 | 0.8423 | | 0.3578 | 8.43 | 2800 | 0.3633 | 0.8426 | 0.8428 | | 0.3639 | 9.04 | 
3000 | 0.3591 | 0.8427 | 0.8427 | | 0.3559 | 9.64 | 3200 | 0.3590 | 0.8442 | 0.8442 | | 0.3546 | 10.24 | 3400 | 0.3612 | 0.8438 | 0.8438 | | 0.353 | 10.84 | 3600 | 0.3598 | 0.8436 | 0.8436 | | 0.3518 | 11.45 | 3800 | 0.3592 | 0.8429 | 0.8428 | | 0.3512 | 12.05 | 4000 | 0.3574 | 0.8431 | 0.8430 | | 0.3473 | 12.65 | 4200 | 0.3593 | 0.8451 | 0.8451 | | 0.3488 | 13.25 | 4400 | 0.3598 | 0.8424 | 0.8425 | | 0.3509 | 13.86 | 4600 | 0.3601 | 0.8475 | 0.8476 | | 0.3471 | 14.46 | 4800 | 0.3589 | 0.8492 | 0.8493 | | 0.3437 | 15.06 | 5000 | 0.3577 | 0.8466 | 0.8466 | | 0.3406 | 15.66 | 5200 | 0.3582 | 0.8488 | 0.8489 | | 0.3433 | 16.27 | 5400 | 0.3622 | 0.8451 | 0.8451 | | 0.3414 | 16.87 | 5600 | 0.3656 | 0.8457 | 0.8461 | | 0.3373 | 17.47 | 5800 | 0.3634 | 0.8453 | 0.8455 | | 0.3475 | 18.07 | 6000 | 0.3605 | 0.8451 | 0.8453 | | 0.3369 | 18.67 | 6200 | 0.3579 | 0.8486 | 0.8487 | | 0.3393 | 19.28 | 6400 | 0.3588 | 0.8457 | 0.8457 | | 0.339 | 19.88 | 6600 | 0.3589 | 0.8460 | 0.8461 | | 0.332 | 20.48 | 6800 | 0.3609 | 0.8452 | 0.8453 | | 0.3415 | 21.08 | 7000 | 0.3592 | 0.8456 | 0.8457 | | 0.337 | 21.69 | 7200 | 0.3605 | 0.8470 | 0.8470 | | 0.331 | 22.29 | 7400 | 0.3590 | 0.8488 | 0.8489 | | 0.3313 | 22.89 | 7600 | 0.3626 | 0.8461 | 0.8462 | | 0.3318 | 23.49 | 7800 | 0.3614 | 0.8460 | 0.8461 | | 0.3358 | 24.1 | 8000 | 0.3623 | 0.8486 | 0.8487 | | 0.3355 | 24.7 | 8200 | 0.3616 | 0.8468 | 0.8470 | | 0.3265 | 25.3 | 8400 | 0.3658 | 0.8444 | 0.8445 | | 0.3346 | 25.9 | 8600 | 0.3607 | 0.8490 | 0.8491 | | 0.3311 | 26.51 | 8800 | 0.3616 | 0.8485 | 0.8485 | | 0.3307 | 27.11 | 9000 | 0.3607 | 0.8474 | 0.8474 | | 0.3341 | 27.71 | 9200 | 0.3618 | 0.8484 | 0.8485 | | 0.3214 | 28.31 | 9400 | 0.3636 | 0.8463 | 0.8464 | | 0.3288 | 28.92 | 9600 | 0.3634 | 0.8482 | 0.8483 | | 0.3325 | 29.52 | 9800 | 0.3626 | 0.8479 | 0.8479 | | 0.324 | 30.12 | 10000 | 0.3628 | 0.8477 | 0.8477 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
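As an aside on the `lr_scheduler_type: linear` setting above: with no warmup and `training_steps: 10000`, the learning rate decays linearly from 0.0005 to 0 over the course of training. A minimal pure-Python sketch of that schedule (the helper name is ours, not part of the training code):

```python
def linear_lr(step, base_lr=5e-4, total_steps=10_000, warmup_steps=0):
    """Linear schedule: optional warmup, then linear decay to zero."""
    if step < warmup_steps:
        # Ramp up from 0 to base_lr during warmup.
        return base_lr * step / max(1, warmup_steps)
    # Fraction of the post-warmup budget still remaining.
    remaining = (total_steps - step) / max(1, total_steps - warmup_steps)
    return base_lr * max(0.0, remaining)

print(linear_lr(0))       # 0.0005 at the start
print(linear_lr(5_000))   # 0.00025 halfway through
print(linear_lr(10_000))  # 0.0 at the end
```

This mirrors the behavior of the `linear` scheduler in `transformers` (`get_linear_schedule_with_warmup`) for the hyperparameters listed above.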
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T19:48:43+00:00
text-generation
transformers
{"license": "apache-2.0"}
Jennny/bb_finetuned
null
[ "transformers", "safetensors", "gptj", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T19:48:55+00:00
null
null
{}
Nick-Dev/Caramel
null
[ "region:us" ]
null
2024-04-26T19:49:38+00:00
image-segmentation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mario-semantic-1 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the Custom mario Dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.0721 - Mean Iou: 0.0 - Mean Accuracy: 0.0 - Overall Accuracy: 0.0 - Accuracy Unlabeled: nan - Accuracy Mario: 0.0 - Accuracy Ground: 0.0 - Accuracy Enemy: 0.0 - Accuracy Bricks: 0.0 - Accuracy Question: 0.0 - Iou Unlabeled: 0.0 - Iou Mario: 0.0 - Iou Ground: 0.0 - Iou Enemy: 0.0 - Iou Bricks: 0.0 - Iou Question: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Mario | Accuracy Ground | Accuracy Enemy | Accuracy Bricks | Accuracy Question | Iou Unlabeled | Iou Mario | Iou Ground | Iou Enemy | Iou Bricks | Iou Question | |:-------------:|:------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:--------------:|:---------------:|:--------------:|:---------------:|:-----------------:|:-------------:|:---------:|:----------:|:---------:|:----------:|:------------:| | 1.1471 | 0.2222 | 10 | 1.3150 | 0.0054 | 0.0409 | 0.0429 | nan | 0.0587 | 0.0 | 0.0305 | 0.0481 | 0.0674 | 0.0 | 0.0141 | 0.0 | 0.0110 | 0.0010 | 0.0063 | | 1.0399 | 0.4444 | 20 | 1.1597 | 0.0042 | 0.0247 | 0.0335 | nan | 0.0687 | 
0.0 | 0.0054 | 0.0098 | 0.0397 | 0.0 | 0.0136 | 0.0 | 0.0029 | 0.0005 | 0.0081 | | 0.8368 | 0.6667 | 30 | 0.9484 | 0.0018 | 0.0052 | 0.0054 | nan | 0.0024 | 0.0 | 0.0098 | 0.0018 | 0.0121 | 0.0 | 0.0012 | 0.0 | 0.0049 | 0.0002 | 0.0046 | | 0.9264 | 0.8889 | 40 | 0.7115 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.7753 | 1.1111 | 50 | 0.7572 | 0.0010 | 0.0023 | 0.0038 | nan | 0.0 | 0.0 | 0.0113 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0062 | 0.0 | 0.0 | | 0.6295 | 1.3333 | 60 | 0.5617 | 0.0001 | 0.0002 | 0.0003 | nan | 0.0 | 0.0 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.0 | 0.0 | | 0.5956 | 1.5556 | 70 | 0.4135 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.5756 | 1.7778 | 80 | 0.2028 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.5318 | 2.0 | 90 | 0.1185 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.5351 | 2.2222 | 100 | 0.3064 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.5706 | 2.4444 | 110 | 0.1378 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.4863 | 2.6667 | 120 | 0.1121 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.3226 | 2.8889 | 130 | 0.2038 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.4139 | 3.1111 | 140 | 0.1520 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.3983 | 3.3333 | 150 | 0.1070 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.3672 | 3.5556 | 160 | 0.1282 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.3324 | 3.7778 | 170 | 0.1075 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 
0.0 | 0.0 | | 0.2806 | 4.0 | 180 | 0.2677 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.2854 | 4.2222 | 190 | 0.1020 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.3463 | 4.4444 | 200 | 0.0551 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1957 | 4.6667 | 210 | 0.1982 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.3063 | 4.8889 | 220 | 0.0962 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1933 | 5.1111 | 230 | 0.1172 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1833 | 5.3333 | 240 | 0.0600 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.231 | 5.5556 | 250 | 0.0519 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1516 | 5.7778 | 260 | 0.0575 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.172 | 6.0 | 270 | 0.1182 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1307 | 6.2222 | 280 | 0.0989 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1454 | 6.4444 | 290 | 0.1045 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1319 | 6.6667 | 300 | 0.0793 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1154 | 6.8889 | 310 | 0.0567 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1241 | 7.1111 | 320 | 0.0562 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1379 | 7.3333 | 330 | 0.0700 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 
0.0 | | 0.1183 | 7.5556 | 340 | 0.0616 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.108 | 7.7778 | 350 | 0.0823 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1204 | 8.0 | 360 | 0.0661 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1391 | 8.2222 | 370 | 0.0578 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1554 | 8.4444 | 380 | 0.0643 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1338 | 8.6667 | 390 | 0.0822 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1358 | 8.8889 | 400 | 0.0997 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1704 | 9.1111 | 410 | 0.0503 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1242 | 9.3333 | 420 | 0.0692 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1153 | 9.5556 | 430 | 0.1003 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0999 | 9.7778 | 440 | 0.0909 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0968 | 10.0 | 450 | 0.0721 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0 - Datasets 2.19.0 - Tokenizers 0.19.1
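For reference, the Mean IoU metric reported above is the per-class intersection-over-union averaged across classes. A minimal pure-Python sketch of that definition on flat label arrays (illustrative only; not the evaluation code this card was generated from):

```python
def per_class_iou(preds, labels, num_classes):
    """IoU per class from flat prediction/label sequences."""
    ious = []
    for c in range(num_classes):
        # Pixels predicted AND labeled as class c.
        inter = sum(1 for p, l in zip(preds, labels) if p == c and l == c)
        # Pixels predicted OR labeled as class c.
        union = sum(1 for p, l in zip(preds, labels) if p == c or l == c)
        ious.append(inter / union if union else float("nan"))
    return ious

preds = [0, 1, 1, 2]
labels = [0, 1, 2, 2]
ious = per_class_iou(preds, labels, num_classes=3)
print(ious)  # [1.0, 0.5, 0.5]
print(round(sum(ious) / len(ious), 3))  # mean IoU: 0.667
```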
{"license": "other", "tags": ["vision", "image-segmentation", "generated_from_trainer"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "mario-semantic-1", "results": []}]}
karthik540/mario-semantic-1
null
[ "transformers", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-26T19:50:40+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Lodo97/coder-2b-v0.1-hfrl
null
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T19:50:46+00:00
null
transformers
{}
anismahmahi/QCRI
null
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2024-04-26T19:51:55+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.3807 - F1 Score: 0.8364 - Accuracy: 0.8364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.47 | 0.6 | 200 | 0.4062 | 0.8137 | 0.8144 | | 0.405 | 1.2 | 400 | 0.3753 | 0.8331 | 0.8332 | | 0.3902 | 1.81 | 600 | 0.3661 | 0.8386 | 0.8387 | | 0.3818 | 2.41 | 800 | 0.3680 | 0.8363 | 0.8363 | | 0.3757 | 3.01 | 1000 | 0.3655 | 0.8385 | 0.8387 | | 0.3689 | 3.61 | 1200 | 0.3578 | 0.8413 | 0.8413 | | 0.3703 | 4.22 | 1400 | 0.3616 | 0.8397 | 0.8396 | | 0.3694 | 4.82 | 1600 | 0.3579 | 0.8394 | 0.8395 | | 0.3556 | 5.42 | 1800 | 0.3794 | 0.8334 | 0.8336 | | 0.3611 | 6.02 | 2000 | 0.3611 | 0.8407 | 0.8408 | | 0.3539 | 6.63 | 2200 | 0.3585 | 0.8430 | 0.8430 | | 0.3493 | 7.23 | 2400 | 0.3640 | 0.8451 | 0.8453 | | 0.3511 | 7.83 | 2600 | 0.3590 | 0.8455 | 0.8455 | | 0.3419 | 8.43 | 2800 | 0.3626 | 0.8412 | 0.8415 | | 0.3473 | 9.04 | 
3000 | 0.3592 | 0.8443 | 0.8444 | | 0.3401 | 9.64 | 3200 | 0.3644 | 0.8464 | 0.8464 | | 0.3358 | 10.24 | 3400 | 0.3688 | 0.8462 | 0.8462 | | 0.3351 | 10.84 | 3600 | 0.3579 | 0.8440 | 0.8440 | | 0.3314 | 11.45 | 3800 | 0.3599 | 0.8427 | 0.8427 | | 0.3321 | 12.05 | 4000 | 0.3602 | 0.8457 | 0.8457 | | 0.3235 | 12.65 | 4200 | 0.3640 | 0.8466 | 0.8466 | | 0.3256 | 13.25 | 4400 | 0.3656 | 0.8436 | 0.8438 | | 0.3269 | 13.86 | 4600 | 0.3658 | 0.8456 | 0.8457 | | 0.3203 | 14.46 | 4800 | 0.3698 | 0.8444 | 0.8445 | | 0.3176 | 15.06 | 5000 | 0.3608 | 0.8449 | 0.8449 | | 0.3121 | 15.66 | 5200 | 0.3632 | 0.8458 | 0.8459 | | 0.3149 | 16.27 | 5400 | 0.3695 | 0.8465 | 0.8466 | | 0.3116 | 16.87 | 5600 | 0.3806 | 0.8414 | 0.8419 | | 0.306 | 17.47 | 5800 | 0.3744 | 0.8403 | 0.8406 | | 0.3172 | 18.07 | 6000 | 0.3707 | 0.8453 | 0.8457 | | 0.3045 | 18.67 | 6200 | 0.3712 | 0.8458 | 0.8459 | | 0.3058 | 19.28 | 6400 | 0.3723 | 0.8437 | 0.8438 | | 0.3024 | 19.88 | 6600 | 0.3660 | 0.8462 | 0.8462 | | 0.2957 | 20.48 | 6800 | 0.3684 | 0.8455 | 0.8457 | | 0.3049 | 21.08 | 7000 | 0.3726 | 0.8426 | 0.8428 | | 0.3004 | 21.69 | 7200 | 0.3743 | 0.8449 | 0.8449 | | 0.2911 | 22.29 | 7400 | 0.3723 | 0.8417 | 0.8419 | | 0.2934 | 22.89 | 7600 | 0.3815 | 0.8423 | 0.8425 | | 0.2913 | 23.49 | 7800 | 0.3742 | 0.8429 | 0.8430 | | 0.2947 | 24.1 | 8000 | 0.3811 | 0.8436 | 0.8438 | | 0.2925 | 24.7 | 8200 | 0.3807 | 0.8405 | 0.8408 | | 0.2832 | 25.3 | 8400 | 0.3903 | 0.8431 | 0.8432 | | 0.2904 | 25.9 | 8600 | 0.3788 | 0.8433 | 0.8434 | | 0.2891 | 26.51 | 8800 | 0.3780 | 0.8449 | 0.8449 | | 0.2876 | 27.11 | 9000 | 0.3794 | 0.8415 | 0.8415 | | 0.2877 | 27.71 | 9200 | 0.3848 | 0.8414 | 0.8415 | | 0.2785 | 28.31 | 9400 | 0.3852 | 0.8391 | 0.8393 | | 0.2839 | 28.92 | 9600 | 0.3865 | 0.8421 | 0.8423 | | 0.2825 | 29.52 | 9800 | 0.3854 | 0.8410 | 0.8412 | | 0.2791 | 30.12 | 10000 | 0.3856 | 0.8420 | 0.8421 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 
0.15.2
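A note on the metrics above: for a roughly balanced binary task like this one, the macro-averaged F1 score and plain accuracy track each other almost exactly, which is why the two columns are nearly identical. A minimal pure-Python sketch of both metrics (illustrative; not the evaluation code used here):

```python
def binary_macro_f1_and_accuracy(preds, labels):
    """Macro-averaged F1 and plain accuracy for 0/1 predictions."""
    def f1_for(cls):
        tp = sum(1 for p, l in zip(preds, labels) if p == cls and l == cls)
        fp = sum(1 for p, l in zip(preds, labels) if p == cls and l != cls)
        fn = sum(1 for p, l in zip(preds, labels) if p != cls and l == cls)
        denom = 2 * tp + fp + fn
        return 2 * tp / denom if denom else 0.0
    macro_f1 = (f1_for(0) + f1_for(1)) / 2
    accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)
    return macro_f1, accuracy

preds = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
f1, acc = binary_macro_f1_and_accuracy(preds, labels)
print(f1, acc)  # 0.75 0.75
```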
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T19:52:02+00:00
text-generation
transformers
<div align="center"> <img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/2FhDh8NDvMl7iSxbQz9BP.png) <div align="center"> <h1>Advancing Open-source Large Language Models in Medical Domain</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/openlifescience-ai"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="#"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/A5Fjf5zC69"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> 
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg) Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks. 🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored to the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency. 🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results than larger proprietary and open-source models such as GPT-3.5 and Meditron-70B on biomedical benchmarks. 🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) base model. It incorporates a DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset.
Key components of the training pipeline include: <div align="center"> <img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png"> </div> - **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290) - **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) - **Fine-tuning dataset**: Custom Medical Instruct dataset (we plan to release a sample training dataset in our upcoming paper; please stay updated) This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications. ⚙️ **Release Details**: - **Model Size**: 8 billion parameters - **Quantization**: Optimized quantized versions available [here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF) - **Language(s) (NLP):** en - **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs - **License:** Meta-Llama License - **Fine-tuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) - **Resources for more information:** - Paper: Coming soon The model can be fine-tuned for more specialized tasks and datasets as needed. OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences. We are excited to share OpenBioLLM-8B with researchers and developers around the world. ### Use with transformers **Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases.
Please consider setting temperature = 0 to make this happen less often.** See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs to help answer the user's query with an explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# Greedy decoding (do_sample=False) is the equivalent of temperature = 0.
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,
)

print(outputs[0]["generated_text"][len(prompt):])
```

## **Training procedure** ### **Training hyperparameters** <details> <summary>Click to see details</summary> - learning_rate: 0.0002 - lr_scheduler: cosine - train_batch_size: 12 - eval_batch_size: 8 - GPU: H100 80GB SXM5 - num_devices: 1 - optimizer: adamw_bnb_8bit - lr_scheduler_warmup_steps: 100 - num_epochs: 4 </details> ### **Peft hyperparameters** <details> <summary>Click to see details</summary> - adapter: qlora - lora_r: 128 - lora_alpha: 256 - lora_dropout: 0.05 - lora_target_linear: true - lora_target_modules: - q_proj - v_proj
- k_proj - o_proj - gate_proj - down_proj - up_proj </details> ### **Training results** ### **Framework versions** - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 - Axolotl - lm-evaluation-harness for evaluation # Benchmark Results 🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50% despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge. 🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM does not report zero-shot accuracy, we use the 5-shot accuracy from its paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **Below results are from the quantized version of OpenBioLLM-70B** # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. 
This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png) # Biomarkers Extraction ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png) # Classification OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png) # De-Identification OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png) **Advisory Notice!** While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments. Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Their use should be limited to research, development, and exploratory applications by qualified individuals who understand their limitations. 
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor. Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including: - Thorough testing and evaluation in relevant clinical scenarios - Alignment with evidence-based guidelines and best practices - Mitigation of potential biases and failure modes - Integration with human oversight and interpretation - Compliance with regulatory and ethical standards Always consult a qualified healthcare provider for personal medical needs. # Citation If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows: ``` @misc{OpenBioLLMs, author = {Ankit Pal and Malaikannan Sankarasubbu}, title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}} } ``` The accompanying paper is currently in progress and will be released soon. <div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing from you and collaborating on this exciting project! **Contributors:** - [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com] - Saama AI Labs - Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset. # References We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models! 
Result sources - [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375) - [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138) - [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) - [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
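For exploratory use, prompts to OpenBioLLM follow the standard Llama-3 chat template, which the tokenizer's `apply_chat_template` builds automatically. A minimal sketch of assembling the same prompt string by hand — the special tokens below are assumed to be the stock Llama-3 ones, and the system message is the one recommended in this card:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a Llama-3-style chat prompt string by hand (illustration only)."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"  # generation starts here
    )

system_msg = (
    "You are an expert and experienced from the healthcare and biomedical domain "
    "with extensive medical knowledge and practical experience."
)
prompt = build_llama3_prompt(
    system_msg, "How long does it take for newborn jaundice to go away?"
)
```

In practice, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` so the special tokens always match the checkpoint.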
{"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]}
LoneStriker/OpenBioLLM-Llama3-8B-8.0bpw-h8-exl2
null
[ "transformers", "pytorch", "llama", "text-generation", "llama-3", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-26T19:52:20+00:00
null
null
{"license": "openrail"}
e2h/StrongNayeonByLeeloLiFixed
null
[ "license:openrail", "region:us" ]
null
2024-04-26T19:57:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_tata-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4677 - F1 Score: 0.8303 - Accuracy: 0.8303 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.564 | 5.13 | 200 | 0.5603 | 0.7059 | 0.7064 | | 0.5228 | 10.26 | 400 | 0.5456 | 0.7313 | 0.7325 | | 0.4936 | 15.38 | 600 | 0.5083 | 0.7529 | 0.7537 | | 0.459 | 20.51 | 800 | 0.4685 | 0.7673 | 0.7684 | | 0.4227 | 25.64 | 1000 | 0.4269 | 0.8025 | 0.8026 | | 0.3929 | 30.77 | 1200 | 0.4184 | 0.8203 | 0.8206 | | 0.3703 | 35.9 | 1400 | 0.4158 | 0.8204 | 0.8206 | | 0.3566 | 41.03 | 1600 | 0.3927 | 0.8400 | 0.8401 | | 0.3452 | 46.15 | 1800 | 0.3935 | 0.8385 | 0.8385 | | 0.33 | 51.28 | 2000 | 0.3986 | 0.8368 | 0.8369 | | 0.3209 | 56.41 | 2200 | 0.3908 | 0.8433 | 0.8434 | | 0.3114 | 61.54 | 2400 | 0.3818 | 0.8449 | 0.8450 | | 0.3025 | 66.67 | 2600 | 0.3809 | 0.8531 | 0.8532 | | 0.2974 | 71.79 | 2800 | 0.3810 | 0.8515 | 0.8515 | | 0.278 | 
76.92 | 3000 | 0.3911 | 0.8548 | 0.8548 | | 0.2771 | 82.05 | 3200 | 0.3951 | 0.8385 | 0.8385 | | 0.2645 | 87.18 | 3400 | 0.4001 | 0.8434 | 0.8434 | | 0.2592 | 92.31 | 3600 | 0.4055 | 0.8562 | 0.8564 | | 0.2448 | 97.44 | 3800 | 0.4128 | 0.8513 | 0.8515 | | 0.2415 | 102.56 | 4000 | 0.4101 | 0.8531 | 0.8532 | | 0.2343 | 107.69 | 4200 | 0.4071 | 0.8449 | 0.8450 | | 0.2232 | 112.82 | 4400 | 0.4219 | 0.8463 | 0.8467 | | 0.2209 | 117.95 | 4600 | 0.4118 | 0.8514 | 0.8515 | | 0.2116 | 123.08 | 4800 | 0.4258 | 0.8532 | 0.8532 | | 0.2072 | 128.21 | 5000 | 0.4340 | 0.8578 | 0.8581 | | 0.2006 | 133.33 | 5200 | 0.4217 | 0.8547 | 0.8548 | | 0.1946 | 138.46 | 5400 | 0.4435 | 0.8430 | 0.8434 | | 0.185 | 143.59 | 5600 | 0.4495 | 0.8482 | 0.8483 | | 0.183 | 148.72 | 5800 | 0.4562 | 0.8399 | 0.8401 | | 0.1738 | 153.85 | 6000 | 0.4683 | 0.8495 | 0.8499 | | 0.1735 | 158.97 | 6200 | 0.4558 | 0.8546 | 0.8548 | | 0.17 | 164.1 | 6400 | 0.4687 | 0.8564 | 0.8564 | | 0.1651 | 169.23 | 6600 | 0.4706 | 0.8531 | 0.8532 | | 0.1628 | 174.36 | 6800 | 0.4622 | 0.8515 | 0.8515 | | 0.1592 | 179.49 | 7000 | 0.4657 | 0.8579 | 0.8581 | | 0.1568 | 184.62 | 7200 | 0.4697 | 0.8564 | 0.8564 | | 0.1531 | 189.74 | 7400 | 0.4754 | 0.8515 | 0.8515 | | 0.1519 | 194.87 | 7600 | 0.4839 | 0.8481 | 0.8483 | | 0.1456 | 200.0 | 7800 | 0.4810 | 0.8513 | 0.8515 | | 0.1439 | 205.13 | 8000 | 0.4818 | 0.8433 | 0.8434 | | 0.1409 | 210.26 | 8200 | 0.4847 | 0.8433 | 0.8434 | | 0.1398 | 215.38 | 8400 | 0.4923 | 0.8481 | 0.8483 | | 0.1384 | 220.51 | 8600 | 0.4877 | 0.8482 | 0.8483 | | 0.1407 | 225.64 | 8800 | 0.4909 | 0.8400 | 0.8401 | | 0.1375 | 230.77 | 9000 | 0.4941 | 0.8481 | 0.8483 | | 0.1377 | 235.9 | 9200 | 0.4932 | 0.8450 | 0.8450 | | 0.1371 | 241.03 | 9400 | 0.4942 | 0.8449 | 0.8450 | | 0.1392 | 246.15 | 9600 | 0.4937 | 0.8417 | 0.8418 | | 0.1329 | 251.28 | 9800 | 0.4935 | 0.8465 | 0.8467 | | 0.1306 | 256.41 | 10000 | 0.4939 | 0.8481 | 0.8483 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 
2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T19:57:34+00:00
text-to-speech
null
![image/png](https://cdn-uploads.huggingface.co/production/uploads/662c0691b7c202c0841bcb4f/ZtIDazPj6z3It8KUkz03F.png) High-quality Wim Hof RVC voice model, trained for 300 epochs on interview recordings.
{"language": ["en"], "license": "afl-3.0", "tags": ["rvc", "speech", "tts", "voice model"], "pipeline_tag": "text-to-speech"}
Schnorchelgubby/WimHof
null
[ "rvc", "speech", "tts", "voice model", "text-to-speech", "en", "license:afl-3.0", "region:us" ]
null
2024-04-26T19:58:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_tata-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.6392 - F1 Score: 0.8303 - Accuracy: 0.8303 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5536 | 5.13 | 200 | 0.5507 | 0.7178 | 0.7178 | | 0.4771 | 10.26 | 400 | 0.4528 | 0.7846 | 0.7847 | | 0.3954 | 15.38 | 600 | 0.4052 | 0.8091 | 0.8091 | | 0.3501 | 20.51 | 800 | 0.4084 | 0.8120 | 0.8124 | | 0.3223 | 25.64 | 1000 | 0.4058 | 0.8278 | 0.8287 | | 0.2912 | 30.77 | 1200 | 0.4098 | 0.8314 | 0.8320 | | 0.2756 | 35.9 | 1400 | 0.3914 | 0.8384 | 0.8385 | | 0.2552 | 41.03 | 1600 | 0.3971 | 0.8350 | 0.8352 | | 0.2373 | 46.15 | 1800 | 0.4074 | 0.8365 | 0.8369 | | 0.2217 | 51.28 | 2000 | 0.4023 | 0.8352 | 0.8352 | | 0.2042 | 56.41 | 2200 | 0.4607 | 0.8334 | 0.8336 | | 0.1924 | 61.54 | 2400 | 0.4388 | 0.8286 | 0.8287 | | 0.1848 | 66.67 | 2600 | 0.4548 | 0.8349 | 0.8352 | | 0.1709 | 71.79 | 2800 | 0.4728 | 0.8366 | 0.8369 | | 0.1558 
| 76.92 | 3000 | 0.4994 | 0.8352 | 0.8352 | | 0.1493 | 82.05 | 3200 | 0.5037 | 0.8352 | 0.8352 | | 0.1371 | 87.18 | 3400 | 0.5434 | 0.8401 | 0.8401 | | 0.1331 | 92.31 | 3600 | 0.5410 | 0.8221 | 0.8222 | | 0.1206 | 97.44 | 3800 | 0.5585 | 0.8432 | 0.8434 | | 0.1183 | 102.56 | 4000 | 0.5698 | 0.8416 | 0.8418 | | 0.1081 | 107.69 | 4200 | 0.5582 | 0.8417 | 0.8418 | | 0.105 | 112.82 | 4400 | 0.6159 | 0.8401 | 0.8401 | | 0.0991 | 117.95 | 4600 | 0.6073 | 0.8368 | 0.8369 | | 0.094 | 123.08 | 4800 | 0.6109 | 0.8254 | 0.8254 | | 0.0881 | 128.21 | 5000 | 0.6315 | 0.8352 | 0.8352 | | 0.0883 | 133.33 | 5200 | 0.6070 | 0.8401 | 0.8401 | | 0.0805 | 138.46 | 5400 | 0.6284 | 0.8433 | 0.8434 | | 0.076 | 143.59 | 5600 | 0.6523 | 0.8319 | 0.8320 | | 0.0798 | 148.72 | 5800 | 0.6554 | 0.8401 | 0.8401 | | 0.0728 | 153.85 | 6000 | 0.6709 | 0.8466 | 0.8467 | | 0.0701 | 158.97 | 6200 | 0.6738 | 0.8449 | 0.8450 | | 0.0679 | 164.1 | 6400 | 0.6782 | 0.8417 | 0.8418 | | 0.0687 | 169.23 | 6600 | 0.6762 | 0.8434 | 0.8434 | | 0.0611 | 174.36 | 6800 | 0.6971 | 0.8368 | 0.8369 | | 0.0628 | 179.49 | 7000 | 0.7038 | 0.8352 | 0.8352 | | 0.0577 | 184.62 | 7200 | 0.6977 | 0.8368 | 0.8369 | | 0.0569 | 189.74 | 7400 | 0.6989 | 0.8450 | 0.8450 | | 0.0579 | 194.87 | 7600 | 0.6972 | 0.8450 | 0.8450 | | 0.0572 | 200.0 | 7800 | 0.7021 | 0.8416 | 0.8418 | | 0.0567 | 205.13 | 8000 | 0.7044 | 0.8320 | 0.8320 | | 0.0549 | 210.26 | 8200 | 0.7075 | 0.8433 | 0.8434 | | 0.0493 | 215.38 | 8400 | 0.7109 | 0.8369 | 0.8369 | | 0.0514 | 220.51 | 8600 | 0.7240 | 0.8336 | 0.8336 | | 0.0511 | 225.64 | 8800 | 0.7316 | 0.8401 | 0.8401 | | 0.05 | 230.77 | 9000 | 0.7390 | 0.8418 | 0.8418 | | 0.0501 | 235.9 | 9200 | 0.7306 | 0.8385 | 0.8385 | | 0.0506 | 241.03 | 9400 | 0.7358 | 0.8401 | 0.8401 | | 0.0482 | 246.15 | 9600 | 0.7364 | 0.8418 | 0.8418 | | 0.0464 | 251.28 | 9800 | 0.7357 | 0.8401 | 0.8401 | | 0.0482 | 256.41 | 10000 | 0.7352 | 0.8434 | 0.8434 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 
2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T19:58:19+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
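The reported total_train_batch_size follows from the other settings: per-device batch × gradient accumulation steps × number of devices. A one-line sanity check of that arithmetic (a generic helper, not part of the training code):

```python
def effective_batch_size(per_device: int, accum_steps: int, num_devices: int) -> int:
    """Examples consumed per optimizer step."""
    return per_device * accum_steps * num_devices

# 8 per device x 4 accumulation steps x 8 GPUs = 256
assert effective_batch_size(8, 4, 8) == 256
```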
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1", "results": []}]}
ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T19:58:48+00:00
null
null
{"license": "openrail"}
e2h/StrongJihyoByLeeloLiFix
null
[ "license:openrail", "region:us" ]
null
2024-04-26T19:59:11+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/rimy/huggingface/runs/a0v6bf61) # results This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.41.0.dev0 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "model-index": [{"name": "results", "results": []}]}
Rimyy/results
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-04-26T19:59:51+00:00
null
null
{"license": "openrail"}
e2h/strongmomobyleelolifix
null
[ "license:openrail", "region:us" ]
null
2024-04-26T20:00:03+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_tata-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.8199 - F1 Score: 0.8597 - Accuracy: 0.8597 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5372 | 5.13 | 200 | 0.5163 | 0.7428 | 0.7439 | | 0.4183 | 10.26 | 400 | 0.4135 | 0.8167 | 0.8173 | | 0.3363 | 15.38 | 600 | 0.3974 | 0.8319 | 0.8320 | | 0.2859 | 20.51 | 800 | 0.3957 | 0.8416 | 0.8418 | | 0.2401 | 25.64 | 1000 | 0.4302 | 0.8189 | 0.8189 | | 0.2023 | 30.77 | 1200 | 0.4588 | 0.8544 | 0.8548 | | 0.1651 | 35.9 | 1400 | 0.4583 | 0.8450 | 0.8450 | | 0.1387 | 41.03 | 1600 | 0.5110 | 0.8448 | 0.8450 | | 0.1213 | 46.15 | 1800 | 0.5007 | 0.8434 | 0.8434 | | 0.0993 | 51.28 | 2000 | 0.5576 | 0.8450 | 0.8450 | | 0.0873 | 56.41 | 2200 | 0.5942 | 0.8531 | 0.8532 | | 0.0796 | 61.54 | 2400 | 0.6050 | 0.8401 | 0.8401 | | 0.0712 | 66.67 | 2600 | 0.5897 | 0.8463 | 0.8467 | | 0.0625 | 71.79 | 2800 | 0.6404 | 0.8466 | 0.8467 | | 
0.0565 | 76.92 | 3000 | 0.6529 | 0.8434 | 0.8434 | | 0.0515 | 82.05 | 3200 | 0.7046 | 0.8499 | 0.8499 | | 0.0491 | 87.18 | 3400 | 0.6990 | 0.8515 | 0.8515 | | 0.0458 | 92.31 | 3600 | 0.7129 | 0.8499 | 0.8499 | | 0.0414 | 97.44 | 3800 | 0.7288 | 0.8482 | 0.8483 | | 0.0401 | 102.56 | 4000 | 0.6625 | 0.8401 | 0.8401 | | 0.0372 | 107.69 | 4200 | 0.6802 | 0.8466 | 0.8467 | | 0.0331 | 112.82 | 4400 | 0.7559 | 0.8548 | 0.8548 | | 0.0327 | 117.95 | 4600 | 0.7604 | 0.8385 | 0.8385 | | 0.0325 | 123.08 | 4800 | 0.7189 | 0.8483 | 0.8483 | | 0.0289 | 128.21 | 5000 | 0.7833 | 0.8483 | 0.8483 | | 0.0292 | 133.33 | 5200 | 0.7775 | 0.8515 | 0.8515 | | 0.0294 | 138.46 | 5400 | 0.7406 | 0.8597 | 0.8597 | | 0.0246 | 143.59 | 5600 | 0.7963 | 0.8548 | 0.8548 | | 0.0256 | 148.72 | 5800 | 0.7805 | 0.8482 | 0.8483 | | 0.0249 | 153.85 | 6000 | 0.8132 | 0.8613 | 0.8613 | | 0.0248 | 158.97 | 6200 | 0.7969 | 0.8531 | 0.8532 | | 0.0223 | 164.1 | 6400 | 0.7997 | 0.8532 | 0.8532 | | 0.0219 | 169.23 | 6600 | 0.8132 | 0.8434 | 0.8434 | | 0.0181 | 174.36 | 6800 | 0.8788 | 0.8483 | 0.8483 | | 0.0205 | 179.49 | 7000 | 0.8876 | 0.8450 | 0.8450 | | 0.0184 | 184.62 | 7200 | 0.8420 | 0.8450 | 0.8450 | | 0.0181 | 189.74 | 7400 | 0.8651 | 0.8548 | 0.8548 | | 0.018 | 194.87 | 7600 | 0.8124 | 0.8564 | 0.8564 | | 0.018 | 200.0 | 7800 | 0.8607 | 0.8499 | 0.8499 | | 0.0183 | 205.13 | 8000 | 0.8451 | 0.8548 | 0.8548 | | 0.0169 | 210.26 | 8200 | 0.8448 | 0.8434 | 0.8434 | | 0.0157 | 215.38 | 8400 | 0.8915 | 0.8434 | 0.8434 | | 0.0152 | 220.51 | 8600 | 0.9034 | 0.8450 | 0.8450 | | 0.0151 | 225.64 | 8800 | 0.9155 | 0.8483 | 0.8483 | | 0.0159 | 230.77 | 9000 | 0.8968 | 0.8515 | 0.8515 | | 0.0136 | 235.9 | 9200 | 0.9146 | 0.8434 | 0.8434 | | 0.0148 | 241.03 | 9400 | 0.8961 | 0.8467 | 0.8467 | | 0.0134 | 246.15 | 9600 | 0.9116 | 0.8483 | 0.8483 | | 0.0149 | 251.28 | 9800 | 0.9067 | 0.8499 | 0.8499 | | 0.0133 | 256.41 | 10000 | 0.9077 | 0.8532 | 0.8532 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - 
Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T20:00:34+00:00
null
null
{"license": "openrail"}
e2h/strongsanabyleelolifix
null
[ "license:openrail", "region:us" ]
null
2024-04-26T20:00:50+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1944 - Bleu: 0.1991 - Gen Len: 18.18 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 3.6446 | 1.0 | 1617 | 3.2778 | 0.1513 | 18.2069 | | 3.5134 | 2.0 | 3234 | 3.1944 | 0.1991 | 18.18 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
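The Bleu score above comes from the evaluation pipeline; for intuition, BLEU is built from clipped n-gram precisions (the full metric combines precisions for n = 1..4 with a brevity penalty, which this toy sketch omits):

```python
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int = 1) -> float:
    """Clipped n-gram precision, the core ingredient of BLEU (toy version)."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Clip each candidate n-gram count by its count in the reference.
    clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return clipped / total if total else 0.0
```

Clipping is what stops a degenerate output like "the the the" from scoring well against a reference containing "the" once.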
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]}
BohanJiang/my_awesome_opus_books_model
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T20:00:55+00:00
null
null
{"license": "openrail"}
e2h/MinaByLeeloLiFix
null
[ "license:openrail", "region:us" ]
null
2024-04-26T20:01:44+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.2014 - F1 Score: 0.9186 - Accuracy: 0.9186 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3473 | 0.54 | 200 | 0.2737 | 0.8912 | 0.8912 | | 0.2744 | 1.08 | 400 | 0.2439 | 0.9019 | 0.9019 | | 0.2531 | 1.62 | 600 | 0.2441 | 0.9004 | 0.9005 | | 0.2354 | 2.16 | 800 | 0.2235 | 0.9095 | 0.9095 | | 0.2294 | 2.7 | 1000 | 0.2301 | 0.9069 | 0.9069 | | 0.2212 | 3.24 | 1200 | 0.2134 | 0.9135 | 0.9135 | | 0.2206 | 3.78 | 1400 | 0.2116 | 0.9144 | 0.9144 | | 0.2135 | 4.32 | 1600 | 0.2096 | 0.9128 | 0.9128 | | 0.2055 | 4.86 | 1800 | 0.2105 | 0.9181 | 0.9181 | | 0.2048 | 5.41 | 2000 | 0.2123 | 0.9160 | 0.9160 | | 0.2031 | 5.95 | 2200 | 0.2002 | 0.9204 | 0.9204 | | 0.1966 | 6.49 | 2400 | 0.2069 | 0.9162 | 0.9162 | | 0.2001 | 7.03 | 2600 | 0.2018 | 0.9211 | 0.9211 | | 0.192 | 7.57 | 2800 | 0.2009 | 0.9163 | 0.9164 | | 0.1942 | 8.11 | 3000 | 
0.2094 | 0.9165 | 0.9166 | | 0.1921 | 8.65 | 3200 | 0.1999 | 0.9189 | 0.9189 | | 0.186 | 9.19 | 3400 | 0.2058 | 0.9172 | 0.9172 | | 0.1859 | 9.73 | 3600 | 0.1935 | 0.9216 | 0.9216 | | 0.1944 | 10.27 | 3800 | 0.1906 | 0.9236 | 0.9236 | | 0.1856 | 10.81 | 4000 | 0.1931 | 0.9216 | 0.9216 | | 0.1823 | 11.35 | 4200 | 0.1926 | 0.9241 | 0.9242 | | 0.1809 | 11.89 | 4400 | 0.1965 | 0.9243 | 0.9243 | | 0.181 | 12.43 | 4600 | 0.1884 | 0.9246 | 0.9247 | | 0.1852 | 12.97 | 4800 | 0.1896 | 0.9231 | 0.9231 | | 0.1781 | 13.51 | 5000 | 0.1869 | 0.9240 | 0.9240 | | 0.1809 | 14.05 | 5200 | 0.1890 | 0.9250 | 0.925 | | 0.1774 | 14.59 | 5400 | 0.1824 | 0.9274 | 0.9274 | | 0.175 | 15.14 | 5600 | 0.1836 | 0.9277 | 0.9277 | | 0.1751 | 15.68 | 5800 | 0.1880 | 0.9234 | 0.9235 | | 0.1772 | 16.22 | 6000 | 0.1806 | 0.9289 | 0.9289 | | 0.1753 | 16.76 | 6200 | 0.1841 | 0.9272 | 0.9272 | | 0.1751 | 17.3 | 6400 | 0.1916 | 0.9227 | 0.9228 | | 0.1756 | 17.84 | 6600 | 0.1861 | 0.9248 | 0.9248 | | 0.1682 | 18.38 | 6800 | 0.1863 | 0.9256 | 0.9257 | | 0.1768 | 18.92 | 7000 | 0.1835 | 0.9253 | 0.9253 | | 0.1723 | 19.46 | 7200 | 0.1886 | 0.9260 | 0.9260 | | 0.1714 | 20.0 | 7400 | 0.1823 | 0.9279 | 0.9279 | | 0.1713 | 20.54 | 7600 | 0.1845 | 0.9268 | 0.9269 | | 0.169 | 21.08 | 7800 | 0.1821 | 0.9275 | 0.9275 | | 0.1711 | 21.62 | 8000 | 0.1827 | 0.9267 | 0.9267 | | 0.169 | 22.16 | 8200 | 0.1834 | 0.9282 | 0.9282 | | 0.1667 | 22.7 | 8400 | 0.1854 | 0.9267 | 0.9267 | | 0.1698 | 23.24 | 8600 | 0.1871 | 0.9267 | 0.9267 | | 0.1669 | 23.78 | 8800 | 0.1821 | 0.9272 | 0.9272 | | 0.1697 | 24.32 | 9000 | 0.1872 | 0.9260 | 0.9260 | | 0.1653 | 24.86 | 9200 | 0.1853 | 0.9268 | 0.9269 | | 0.1681 | 25.41 | 9400 | 0.1871 | 0.9263 | 0.9264 | | 0.1668 | 25.95 | 9600 | 0.1848 | 0.9280 | 0.9280 | | 0.1661 | 26.49 | 9800 | 0.1822 | 0.9279 | 0.9279 | | 0.1645 | 27.03 | 10000 | 0.1839 | 0.9275 | 0.9275 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
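These adapters are trained with PEFT; assuming a LoRA-style adapter (a common PEFT method, though this card does not state the adapter type explicitly), inference adds a low-rank update (alpha / r) · B · A on top of the frozen base weight W. A toy numeric sketch in plain Python with hypothetical 2-dimensional values:

```python
def matvec(M, x):
    """Matrix-vector product for lists of lists."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=2, r=1):
    """y = W x + (alpha / r) * B (A x), with A: r x d and B: d x r."""
    base = matvec(W, x)                  # frozen base projection
    low_rank = matvec(B, matvec(A, x))   # rank-r detour through A, then B
    scale = alpha / r
    return [b + scale * l for b, l in zip(base, low_rank)]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (identity, for illustration)
A = [[1.0, 1.0]]              # down-projection, r = 1
B = [[1.0], [0.0]]            # up-projection
print(lora_forward(W, A, B, [1.0, 2.0]))
```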
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T20:04:02+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MentalRoberta-GPT2_EmpAI_EOSbajo This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 8000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 0.3476 | 0.9999 | 5998 | 0.3400 | | 0.2941 | 2.0 | 11997 | 0.3041 | | 0.2917 | 2.9997 | 17994 | 0.2970 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
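The effective batch size listed above follows directly from the other hyperparameters, and the step counts imply the rough size of the training split (the split size is an inference, not something the card states):

```python
# Effective batch size for the run above: per-device batch times gradient
# accumulation, matching the card's "total_train_batch_size: 8".
train_batch_size = 4
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 8

# One epoch took 5,998 optimizer steps, so the training split holds roughly
# 5,998 * 8 = 47,984 examples (an estimate derived from the table).
steps_per_epoch = 5998
approx_train_examples = steps_per_epoch * total_train_batch_size
print(approx_train_examples)  # 47984
```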
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "openai-community/gpt2", "model-index": [{"name": "MentalRoberta-GPT2_EmpAI_EOSbajo", "results": []}]}
LuangMV97/MentalRoberta-GPT2_EmpAI_EOSbajo
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T20:05:09+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
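The "How to Get Started with the Model" section above is still a placeholder. A minimal loading sketch for this checkpoint might look like the following; the repo id (`pruning/w6doc6r`) and the causal text-generation setup come from this entry's metadata, while the prompt and sampling settings are illustrative assumptions. The download-heavy calls are kept inside a function that is only run on demand:

```python
# Hypothetical usage sketch — repo id taken from this entry's metadata;
# prompt and sampling settings are illustrative, not from the card.
REPO_ID = "pruning/w6doc6r"

def generation_kwargs(max_new_tokens: int = 64) -> dict:
    # Illustrative sampling settings (assumed, not documented in the card).
    return {"max_new_tokens": max_new_tokens, "do_sample": True, "temperature": 0.7}

def generate_reply(prompt: str) -> str:
    """Downloads the checkpoint on first call; requires `transformers` and `torch`."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, **generation_kwargs())
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling `generate_reply("Hello, how are you?")` then returns the model's continuation of the prompt.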
{"library_name": "transformers", "tags": []}
pruning/w6doc6r
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:06:24+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/umk17z9
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:06:24+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/kgk4u66
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:06:24+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/tn3hfw0
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:06:24+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/9m47dti
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:06:25+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/dj4d7mu
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:06:25+00:00
null
null
{}
Sterling1/Df
null
[ "region:us" ]
null
2024-04-26T20:06:48+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-text-to-sql This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "code-llama-7b-text-to-sql", "results": []}]}
nan-hu-earnin/code-llama-7b-text-to-sql
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-04-26T20:07:06+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_new_lm This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 3.5791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6897 | 1.0 | 1288 | 3.5826 | | 3.5716 | 2.0 | 2576 | 3.5785 | | 3.5233 | 3.0 | 3864 | 3.5791 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "my_awesome_new_lm", "results": []}]}
ljgries/my_awesome_new_lm
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:eli5_category", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T20:07:55+00:00
text-generation
transformers
<div align="center"> # TinyLlama-1.1B </div> We used this version of TinyLlama as a base model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0 The goal was to improve performance on basic algebra (i.e. solving systems of linear equations). The base model was fine-tuned on 8k rows of synthetic solution data generated by [OpenMath-Mistral-7B-v0.1-hf](https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf) on [ALG-514](https://paperswithcode.com/sota/math-word-problem-solving-on-alg514). We used the [NeMo Skills](https://github.com/Kipok/NeMo-Skills) pipeline for inference with code execution and for generating the synthetic data. HuggingFace's SFTTrainer was used for fine-tuning, as the NeMo Skills pipeline is a buggy mess. It took 30 minutes to fine-tune on an RTX 3090. Notes from previous model cards: > We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint. #### Eval ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64388bdd43d932c4623e4983/H07dGzwOfzcvP1GFA1GUq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64388bdd43d932c4623e4983/Qr7rvIms3AL67jltHBXnr.png) Note that `checkpoint_0` is the base model and `checkpoint_mistral` is OpenMath-Mistral-7B-v0.1-hf. The performance is _not good_™, but this model could be used to quickly generate synthetic data, as the coverage is decent for this dataset. The uploaded model is checkpoint-2.6k. People involved in creating this fine-tune: - Coulton Theuer [[email protected]] - Bret Ellenbogen [[email protected]] - Victoria Chang [[email protected]]
{"language": ["en"], "license": "apache-2.0"}
theuerc/TinyLlama-1.1B-Chat-Math-v1.0
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T20:08:33+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.1975 - F1 Score: 0.9246 - Accuracy: 0.9247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3196 | 0.54 | 200 | 0.2469 | 0.9029 | 0.9029 | | 0.2439 | 1.08 | 400 | 0.2205 | 0.9132 | 0.9132 | | 0.2294 | 1.62 | 600 | 0.2277 | 0.9057 | 0.9057 | | 0.2141 | 2.16 | 800 | 0.2027 | 0.9209 | 0.9209 | | 0.2083 | 2.7 | 1000 | 0.2019 | 0.9179 | 0.9179 | | 0.2013 | 3.24 | 1200 | 0.1946 | 0.9221 | 0.9221 | | 0.1984 | 3.78 | 1400 | 0.1923 | 0.9223 | 0.9223 | | 0.194 | 4.32 | 1600 | 0.1892 | 0.9240 | 0.9240 | | 0.1855 | 4.86 | 1800 | 0.2015 | 0.9202 | 0.9203 | | 0.1829 | 5.41 | 2000 | 0.2014 | 0.9214 | 0.9215 | | 0.1822 | 5.95 | 2200 | 0.1839 | 0.9243 | 0.9243 | | 0.175 | 6.49 | 2400 | 0.1885 | 0.9207 | 0.9208 | | 0.1776 | 7.03 | 2600 | 0.1802 | 0.9279 | 0.9279 | | 0.1678 | 7.57 | 2800 | 0.1878 | 0.9214 | 0.9215 | | 0.1716 | 8.11 | 3000 | 
0.1953 | 0.9190 | 0.9191 | | 0.1665 | 8.65 | 3200 | 0.1772 | 0.9233 | 0.9233 | | 0.1611 | 9.19 | 3400 | 0.1872 | 0.9239 | 0.9240 | | 0.1603 | 9.73 | 3600 | 0.1788 | 0.9273 | 0.9274 | | 0.1684 | 10.27 | 3800 | 0.1743 | 0.9284 | 0.9284 | | 0.1605 | 10.81 | 4000 | 0.1840 | 0.9246 | 0.9247 | | 0.1575 | 11.35 | 4200 | 0.1830 | 0.9273 | 0.9274 | | 0.155 | 11.89 | 4400 | 0.1807 | 0.9265 | 0.9265 | | 0.153 | 12.43 | 4600 | 0.1738 | 0.9295 | 0.9296 | | 0.1549 | 12.97 | 4800 | 0.1693 | 0.9309 | 0.9309 | | 0.1497 | 13.51 | 5000 | 0.1735 | 0.9295 | 0.9296 | | 0.1495 | 14.05 | 5200 | 0.1743 | 0.9299 | 0.9299 | | 0.1476 | 14.59 | 5400 | 0.1701 | 0.9324 | 0.9324 | | 0.1461 | 15.14 | 5600 | 0.1743 | 0.9307 | 0.9307 | | 0.1451 | 15.68 | 5800 | 0.1791 | 0.9280 | 0.9280 | | 0.1456 | 16.22 | 6000 | 0.1684 | 0.9326 | 0.9326 | | 0.1436 | 16.76 | 6200 | 0.1755 | 0.9314 | 0.9314 | | 0.1431 | 17.3 | 6400 | 0.1776 | 0.9297 | 0.9297 | | 0.1452 | 17.84 | 6600 | 0.1743 | 0.9312 | 0.9313 | | 0.1373 | 18.38 | 6800 | 0.1736 | 0.9317 | 0.9318 | | 0.1419 | 18.92 | 7000 | 0.1694 | 0.9334 | 0.9334 | | 0.1397 | 19.46 | 7200 | 0.1785 | 0.9304 | 0.9304 | | 0.1366 | 20.0 | 7400 | 0.1709 | 0.9326 | 0.9326 | | 0.1362 | 20.54 | 7600 | 0.1760 | 0.9314 | 0.9314 | | 0.1362 | 21.08 | 7800 | 0.1705 | 0.9333 | 0.9333 | | 0.1351 | 21.62 | 8000 | 0.1751 | 0.9324 | 0.9324 | | 0.1337 | 22.16 | 8200 | 0.1726 | 0.9317 | 0.9318 | | 0.1303 | 22.7 | 8400 | 0.1738 | 0.9317 | 0.9318 | | 0.1305 | 23.24 | 8600 | 0.1715 | 0.9341 | 0.9341 | | 0.1323 | 23.78 | 8800 | 0.1717 | 0.9329 | 0.9329 | | 0.1312 | 24.32 | 9000 | 0.1781 | 0.9305 | 0.9306 | | 0.1328 | 24.86 | 9200 | 0.1716 | 0.9339 | 0.9340 | | 0.1294 | 25.41 | 9400 | 0.1773 | 0.9305 | 0.9306 | | 0.1321 | 25.95 | 9600 | 0.1749 | 0.9317 | 0.9318 | | 0.1293 | 26.49 | 9800 | 0.1729 | 0.9327 | 0.9328 | | 0.1284 | 27.03 | 10000 | 0.1738 | 0.9321 | 0.9321 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T20:08:57+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.1915 - F1 Score: 0.9236 - Accuracy: 0.9236 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.2961 | 0.54 | 200 | 0.2382 | 0.9083 | 0.9083 | | 0.2282 | 1.08 | 400 | 0.2118 | 0.9167 | 0.9167 | | 0.2166 | 1.62 | 600 | 0.2036 | 0.9191 | 0.9191 | | 0.1989 | 2.16 | 800 | 0.1856 | 0.9255 | 0.9255 | | 0.1936 | 2.7 | 1000 | 0.1904 | 0.9202 | 0.9203 | | 0.1846 | 3.24 | 1200 | 0.1763 | 0.9304 | 0.9304 | | 0.1803 | 3.78 | 1400 | 0.1784 | 0.9246 | 0.9247 | | 0.1757 | 4.32 | 1600 | 0.1797 | 0.9268 | 0.9269 | | 0.1685 | 4.86 | 1800 | 0.1864 | 0.9212 | 0.9213 | | 0.1645 | 5.41 | 2000 | 0.1825 | 0.9244 | 0.9245 | | 0.1649 | 5.95 | 2200 | 0.1713 | 0.9287 | 0.9287 | | 0.1554 | 6.49 | 2400 | 0.1837 | 0.9203 | 0.9204 | | 0.1596 | 7.03 | 2600 | 0.1654 | 0.9326 | 0.9326 | | 0.149 | 7.57 | 2800 | 0.1679 | 0.9283 | 0.9284 | | 0.1494 | 8.11 | 3000 | 
0.1779 | 0.9294 | 0.9294 | | 0.1467 | 8.65 | 3200 | 0.1604 | 0.9336 | 0.9336 | | 0.1393 | 9.19 | 3400 | 0.1757 | 0.9266 | 0.9267 | | 0.1376 | 9.73 | 3600 | 0.1739 | 0.9297 | 0.9297 | | 0.1451 | 10.27 | 3800 | 0.1667 | 0.9304 | 0.9304 | | 0.1377 | 10.81 | 4000 | 0.1770 | 0.9295 | 0.9296 | | 0.1322 | 11.35 | 4200 | 0.1680 | 0.9324 | 0.9324 | | 0.1312 | 11.89 | 4400 | 0.1736 | 0.9317 | 0.9318 | | 0.1284 | 12.43 | 4600 | 0.1709 | 0.9322 | 0.9323 | | 0.1284 | 12.97 | 4800 | 0.1614 | 0.9351 | 0.9351 | | 0.1227 | 13.51 | 5000 | 0.1781 | 0.9285 | 0.9285 | | 0.1231 | 14.05 | 5200 | 0.1704 | 0.9305 | 0.9306 | | 0.1182 | 14.59 | 5400 | 0.1663 | 0.9341 | 0.9341 | | 0.1185 | 15.14 | 5600 | 0.1773 | 0.9281 | 0.9282 | | 0.1149 | 15.68 | 5800 | 0.1893 | 0.9298 | 0.9299 | | 0.116 | 16.22 | 6000 | 0.1670 | 0.9350 | 0.9350 | | 0.1114 | 16.76 | 6200 | 0.1812 | 0.9303 | 0.9304 | | 0.1122 | 17.3 | 6400 | 0.1755 | 0.9303 | 0.9304 | | 0.1115 | 17.84 | 6600 | 0.1776 | 0.9307 | 0.9307 | | 0.1043 | 18.38 | 6800 | 0.1797 | 0.9329 | 0.9329 | | 0.1069 | 18.92 | 7000 | 0.1718 | 0.9339 | 0.9340 | | 0.1046 | 19.46 | 7200 | 0.1826 | 0.9319 | 0.9319 | | 0.1016 | 20.0 | 7400 | 0.1825 | 0.9319 | 0.9319 | | 0.1013 | 20.54 | 7600 | 0.1754 | 0.9331 | 0.9331 | | 0.1019 | 21.08 | 7800 | 0.1747 | 0.9341 | 0.9341 | | 0.0981 | 21.62 | 8000 | 0.1815 | 0.9322 | 0.9323 | | 0.0977 | 22.16 | 8200 | 0.1801 | 0.9331 | 0.9331 | | 0.0926 | 22.7 | 8400 | 0.1866 | 0.9326 | 0.9326 | | 0.095 | 23.24 | 8600 | 0.1792 | 0.9350 | 0.9350 | | 0.0926 | 23.78 | 8800 | 0.1840 | 0.9336 | 0.9336 | | 0.0926 | 24.32 | 9000 | 0.1880 | 0.9324 | 0.9324 | | 0.0911 | 24.86 | 9200 | 0.1821 | 0.9346 | 0.9346 | | 0.088 | 25.41 | 9400 | 0.1905 | 0.9329 | 0.9329 | | 0.0924 | 25.95 | 9600 | 0.1874 | 0.9321 | 0.9321 | | 0.0899 | 26.49 | 9800 | 0.1867 | 0.9327 | 0.9328 | | 0.0878 | 27.03 | 10000 | 0.1868 | 0.9322 | 0.9323 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T20:09:05+00:00
null
null
{}
AK232003/gpt2_reward_model
null
[ "region:us" ]
null
2024-04-26T20:10:34+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4728 - F1 Score: 0.7696 - Accuracy: 0.7691 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5656 | 0.97 | 200 | 0.5316 | 0.7443 | 0.7425 | | 0.524 | 1.93 | 400 | 0.5002 | 0.7667 | 0.7655 | | 0.5078 | 2.9 | 600 | 0.5143 | 0.7621 | 0.7607 | | 0.5036 | 3.86 | 800 | 0.4868 | 0.7737 | 0.7737 | | 0.4995 | 4.83 | 1000 | 0.5012 | 0.7639 | 0.7625 | | 0.4916 | 5.8 | 1200 | 0.4945 | 0.7695 | 0.7679 | | 0.4904 | 6.76 | 1400 | 0.4811 | 0.7734 | 0.7719 | | 0.4851 | 7.73 | 1600 | 0.4808 | 0.7789 | 0.7776 | | 0.4825 | 8.7 | 1800 | 0.4791 | 0.7782 | 0.7767 | | 0.4775 | 9.66 | 2000 | 0.4805 | 0.7776 | 0.7761 | | 0.483 | 10.63 | 2200 | 0.4928 | 0.7712 | 0.7697 | | 0.4717 | 11.59 | 2400 | 0.4730 | 0.7798 | 0.7785 | | 0.4769 | 12.56 | 2600 | 0.4842 | 0.7779 | 0.7764 | | 0.4716 | 13.53 | 2800 | 0.4786 | 0.7746 | 0.7731 | | 0.4732 | 14.49 | 3000 | 0.4813 | 0.7731 | 
0.7716 | | 0.4646 | 15.46 | 3200 | 0.4749 | 0.7800 | 0.7788 | | 0.4703 | 16.43 | 3400 | 0.4835 | 0.7749 | 0.7734 | | 0.4664 | 17.39 | 3600 | 0.4700 | 0.7805 | 0.7794 | | 0.4645 | 18.36 | 3800 | 0.4715 | 0.7784 | 0.7785 | | 0.4633 | 19.32 | 4000 | 0.4713 | 0.7847 | 0.7834 | | 0.4597 | 20.29 | 4200 | 0.4761 | 0.7812 | 0.7800 | | 0.462 | 21.26 | 4400 | 0.4740 | 0.7863 | 0.7849 | | 0.4596 | 22.22 | 4600 | 0.4812 | 0.7785 | 0.7770 | | 0.4597 | 23.19 | 4800 | 0.4693 | 0.7873 | 0.7861 | | 0.459 | 24.15 | 5000 | 0.4714 | 0.7849 | 0.7837 | | 0.4567 | 25.12 | 5200 | 0.4793 | 0.7812 | 0.7797 | | 0.4551 | 26.09 | 5400 | 0.4692 | 0.7857 | 0.7843 | | 0.4562 | 27.05 | 5600 | 0.4773 | 0.7833 | 0.7818 | | 0.4522 | 28.02 | 5800 | 0.4689 | 0.7874 | 0.7864 | | 0.4536 | 28.99 | 6000 | 0.4760 | 0.7824 | 0.7809 | | 0.4521 | 29.95 | 6200 | 0.4757 | 0.7851 | 0.7837 | | 0.4519 | 30.92 | 6400 | 0.4663 | 0.7874 | 0.7867 | | 0.4512 | 31.88 | 6600 | 0.4705 | 0.7877 | 0.7864 | | 0.4482 | 32.85 | 6800 | 0.4728 | 0.7890 | 0.7876 | | 0.451 | 33.82 | 7000 | 0.4766 | 0.7830 | 0.7815 | | 0.4483 | 34.78 | 7200 | 0.4779 | 0.7837 | 0.7821 | | 0.4497 | 35.75 | 7400 | 0.4693 | 0.7879 | 0.7867 | | 0.4466 | 36.71 | 7600 | 0.4706 | 0.7886 | 0.7873 | | 0.4458 | 37.68 | 7800 | 0.4705 | 0.7880 | 0.7867 | | 0.4459 | 38.65 | 8000 | 0.4674 | 0.7897 | 0.7888 | | 0.4516 | 39.61 | 8200 | 0.4700 | 0.7889 | 0.7876 | | 0.4434 | 40.58 | 8400 | 0.4737 | 0.7866 | 0.7852 | | 0.4459 | 41.55 | 8600 | 0.4691 | 0.7882 | 0.7870 | | 0.4429 | 42.51 | 8800 | 0.4703 | 0.7895 | 0.7882 | | 0.4472 | 43.48 | 9000 | 0.4776 | 0.7878 | 0.7864 | | 0.4449 | 44.44 | 9200 | 0.4728 | 0.7878 | 0.7864 | | 0.4463 | 45.41 | 9400 | 0.4711 | 0.7874 | 0.7861 | | 0.4401 | 46.38 | 9600 | 0.4737 | 0.7881 | 0.7867 | | 0.4472 | 47.34 | 9800 | 0.4711 | 0.7871 | 0.7858 | | 0.445 | 48.31 | 10000 | 0.4707 | 0.7886 | 0.7873 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T20:11:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4760 - F1 Score: 0.7749 - Accuracy: 0.7746 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5506 | 0.97 | 200 | 0.5042 | 0.7665 | 0.7649 | | 0.5061 | 1.93 | 400 | 0.4880 | 0.7721 | 0.7707 | | 0.4929 | 2.9 | 600 | 0.5016 | 0.7645 | 0.7631 | | 0.4883 | 3.86 | 800 | 0.4736 | 0.7791 | 0.7785 | | 0.4837 | 4.83 | 1000 | 0.5029 | 0.7614 | 0.7604 | | 0.4758 | 5.8 | 1200 | 0.4965 | 0.7623 | 0.7610 | | 0.4725 | 6.76 | 1400 | 0.4706 | 0.7836 | 0.7821 | | 0.4665 | 7.73 | 1600 | 0.4736 | 0.7857 | 0.7843 | | 0.4634 | 8.7 | 1800 | 0.4804 | 0.7809 | 0.7794 | | 0.4562 | 9.66 | 2000 | 0.4784 | 0.7785 | 0.7770 | | 0.4592 | 10.63 | 2200 | 0.4830 | 0.7806 | 0.7791 | | 0.4498 | 11.59 | 2400 | 0.4708 | 0.7844 | 0.7831 | | 0.4515 | 12.56 | 2600 | 0.4800 | 0.7815 | 0.7800 | | 0.445 | 13.53 | 2800 | 0.4796 | 0.7728 | 0.7713 | | 0.4446 | 14.49 | 3000 | 0.4770 | 0.7803 | 
0.7788 | | 0.4338 | 15.46 | 3200 | 0.4799 | 0.7835 | 0.7825 | | 0.4396 | 16.43 | 3400 | 0.4798 | 0.7797 | 0.7782 | | 0.4335 | 17.39 | 3600 | 0.4743 | 0.7841 | 0.7828 | | 0.429 | 18.36 | 3800 | 0.4714 | 0.7858 | 0.7858 | | 0.4269 | 19.32 | 4000 | 0.4705 | 0.7920 | 0.7912 | | 0.4222 | 20.29 | 4200 | 0.4872 | 0.7809 | 0.7800 | | 0.426 | 21.26 | 4400 | 0.4792 | 0.7833 | 0.7818 | | 0.4192 | 22.22 | 4600 | 0.4964 | 0.7758 | 0.7743 | | 0.418 | 23.19 | 4800 | 0.4780 | 0.7823 | 0.7812 | | 0.4172 | 24.15 | 5000 | 0.4955 | 0.7748 | 0.7734 | | 0.4118 | 25.12 | 5200 | 0.5083 | 0.7752 | 0.7737 | | 0.4093 | 26.09 | 5400 | 0.4897 | 0.7761 | 0.7746 | | 0.4119 | 27.05 | 5600 | 0.5046 | 0.7707 | 0.7691 | | 0.4055 | 28.02 | 5800 | 0.4882 | 0.7847 | 0.7834 | | 0.405 | 28.99 | 6000 | 0.4886 | 0.7788 | 0.7773 | | 0.4024 | 29.95 | 6200 | 0.4903 | 0.7714 | 0.7700 | | 0.4001 | 30.92 | 6400 | 0.4825 | 0.7804 | 0.7803 | | 0.3992 | 31.88 | 6600 | 0.4916 | 0.7755 | 0.7746 | | 0.3932 | 32.85 | 6800 | 0.5003 | 0.7751 | 0.7737 | | 0.3965 | 33.82 | 7000 | 0.5031 | 0.7695 | 0.7679 | | 0.3912 | 34.78 | 7200 | 0.5025 | 0.7734 | 0.7719 | | 0.3922 | 35.75 | 7400 | 0.4921 | 0.7713 | 0.7700 | | 0.3893 | 36.71 | 7600 | 0.4995 | 0.7765 | 0.7752 | | 0.386 | 37.68 | 7800 | 0.5018 | 0.7730 | 0.7716 | | 0.3874 | 38.65 | 8000 | 0.5012 | 0.7749 | 0.7737 | | 0.3909 | 39.61 | 8200 | 0.4984 | 0.7721 | 0.7710 | | 0.382 | 40.58 | 8400 | 0.5084 | 0.7713 | 0.7697 | | 0.3837 | 41.55 | 8600 | 0.5034 | 0.7743 | 0.7731 | | 0.3819 | 42.51 | 8800 | 0.5033 | 0.7757 | 0.7746 | | 0.3829 | 43.48 | 9000 | 0.5079 | 0.7757 | 0.7743 | | 0.381 | 44.44 | 9200 | 0.5102 | 0.7727 | 0.7713 | | 0.3843 | 45.41 | 9400 | 0.5049 | 0.7747 | 0.7734 | | 0.376 | 46.38 | 9600 | 0.5101 | 0.7730 | 0.7716 | | 0.3797 | 47.34 | 9800 | 0.5075 | 0.7729 | 0.7716 | | 0.3789 | 48.31 | 10000 | 0.5064 | 0.7740 | 0.7728 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T20:13:41+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
AmrMohamed/Meta-Llama-3-8B-grading
null
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T20:14:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "library_name": "peft", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "outputs", "results": []}]}
AK232003/outputs
null
[ "peft", "safetensors", "trl", "reward-trainer", "generated_from_trainer", "base_model:gpt2", "license:mit", "region:us" ]
null
2024-04-26T20:14:54+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0623 - Precision: 0.9346 - Recall: 0.9505 - F1: 0.9425 - Accuracy: 0.9864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0777 | 1.0 | 1756 | 0.0721 | 0.9129 | 0.9325 | 0.9226 | 0.9814 | | 0.036 | 2.0 | 3512 | 0.0604 | 0.9309 | 0.9477 | 0.9392 | 0.9859 | | 0.0186 | 3.0 | 5268 | 0.0623 | 0.9346 | 0.9505 | 0.9425 | 0.9864 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.1+cpu - Datasets 2.19.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-ner", "results": []}]}
HFDON/bert-finetuned-ner
null
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:15:58+00:00
null
null
{}
GilbertoRigotti/Mercadinho
null
[ "region:us" ]
null
2024-04-26T20:16:18+00:00
null
null
<style> .title-container { display: flex; justify-content: center; align-items: center; height: 25vh; /* Adjust this value to position the title vertically */ } .title { font-size: 2.5em; text-align: center; color: #333; font-family: "lucida sans unicode", "lucida grande", sans-serif; font-style: italic; font-weight: bold; font-variant: small-caps; letter-spacing: 0.05em; padding: 0.5em 0; background: transparent; } .title span { background: -webkit-linear-gradient(45deg, #99E2FF, #FF5C95); -webkit-background-clip: text; -webkit-text-fill-color: transparent; } .custom-table { table-layout: fixed; width: 100%; border-collapse: collapse; margin-top: 0em; } .custom-table td { width: 50%; vertical-align: top; padding: 5px; box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15); } .custom-image-container { position: relative; width: 100%; margin-bottom: 0em; overflow: hidden; border-radius: 5px; transition: transform .7s; /* Smooth transition for the container */ } .custom-image-container:hover { transform: scale(1.05); /* Scale the container on hover */ } .custom-image { width: 100%; height: auto; object-fit: cover; border-radius: 10px; transition: transform .7s; margin-bottom: 0em; } .custom-button { width: auto; height: 20px; object-fit: cover; border-radius: 5px; transition: transform .7s; margin-bottom: 0em; display: block; margin-left: auto; margin-right: auto; } </style> # Trails/Kiseki series - 192 characters/outfits ## __Models created by user [irrel@Civitai](https://civitai.com/user/irrel).__ ## __Originally published [here](https://civitai.com/models/417827).__ This is a set of 192 characters/outfits from the Trails series for use with Pony Diffusion V6 XL or any of its derivatives. 
I chose to separate them into individual loras for improved accuracy.\ The .zip file contains: * 192 lora files (~20MB each) * A folder with 2 text files containing example prompts for each character * kiseki.txt (characters' physical features and outfit) * kiseki-noclothes.txt (characters' physical features only)\ The models should work with most style loras. They are also flexible enough to support simple custom outfits. Some of the more obscure characters might appear a bit stiff as they had to be trained entirely on 3D renders. <table class="custom-table"> <tr> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5e2f634f-f54d-4506-bcdb-046ed72bf9de/original=true/02540-8.jpeg" alt="sample2"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/4fa84748-8c7f-4ebf-9ce0-47c19724e13c/original=true/02222-1349480391.jpeg" alt="sample1"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ebc1f9b4-8baa-493d-98b0-b62a585b47b1/original=true/02381-3917646338.jpeg" alt="sample1"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ceb352de-d566-4e9c-b1d4-a9e3630974ca/original=true/01617-206421237.jpeg" alt="sample1"> </div> </td> </table> <table class="custom-table"> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d6be2e40-c496-4e33-9fee-c4798da8672c/original=true/01618-1856211698.jpeg" alt="sample1"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2ba0670a-4404-48f0-8591-06f0b302c8ae/original=true/01691-382581776.jpeg" alt="sample2"> </div> </td> <td> <div class="custom-image-container"> <img 
class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d95bcecf-47db-4779-a072-36ad28f1b9da/original=true/01242-123.jpeg" alt="sample1"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/53ad44f2-9c4d-406a-a9b6-9ea7014168c2/original=true/02308-3917646265.jpeg" alt="sample1"> </div> </td> </tr> </table> <table class="custom-table"> <tr> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d7f62f19-8033-4a44-9160-ecb96fafad0e/original=true/01873-4122203611.jpeg" alt="sample1"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1484726c-e3ee-4eda-8f39-dccab9827c01/original=true/01712-3795625343.jpeg" alt="sample1"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ca9a0c82-ac35-4999-8c0c-8abc0a5f5453/original=true/02299-3917646256.jpeg" alt="sample1"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2a7a51c3-88e6-4a8b-a4c1-a240509acd67/original=true/01904-1427461507.jpeg" alt="sample1"> </div> </td> </table> <table class="custom-table"> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1a8684cb-5fe2-4942-8247-1e71e0ac20e2/original=true/00926-3008217640.jpeg" alt="sample2"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1c34eff0-30b3-4817-a07f-be1eeabe7ba9/original=true/01530-3892649669.jpeg" alt="sample1"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" 
src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a08256a0-d68b-43b4-a02f-ccae086b591f/original=true/01528-2935231619.jpeg" alt="sample1"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/20d8c0cc-96cb-4ea7-94c9-a4301d21694d/original=true/01370-3301814111.jpeg" alt="sample1"> </div> </td> </tr> </table> ## Included characters: 1. Ada Grant 2. Aeolia 3. Agnes Claudel (5 outfits) 4. Alfin Reise Arnor 5. Alisa Reinford (3 outfits) 6. Altera 7. Altina Orion (5 outfits) 8. Anelace Elfead 9. Angelica Rogner (2 outfits) 10. Annabelle 11. Arianrhod 12. Ashen Lu 13. Aurelia Le Guin 14. Aurier Vander 15. Becky 16. Bennet 17. Beryl 18. Cecile Neues (2 outfits) 19. Celine Millstein 20. Celis Ortesia (2 outfits) 21. Claire Rieveldt (3 outfits) 22. Clarice Seeker 23. Diana 24. Dorothee 25. Duvalie (2 outfits) 26. Edel 27. Elaine Auclair (3 outfits) 28. Elie MacDowell (3 outfits) 29. Elise Schwarzer 30. Emily 31. Emma Millstein (3 outfits) 32. Ennea 33. Erika Russell 34. Esmeray Archette (2 outfits) 35. Estelle Bright (3 outfits) 36. Fatma Worzel 37. Feri Al-Fayed (2 outfits) 38. Ferris Florald 39. Fie Claussell (2 outfits) 40. Fiona Craig 41. Flotte 42. Fran Seeker 43. Grace Lynn 44. Grandmaster 45. Grimcats 46. Hal Coleman 47. Hermes 48. Ilya Platiere (2 outfits) 49. Ines 50. Irina Reinford 51. Jessica Schleiden 52. Jingo 53. Jolda 54. Josette Capua (2 outfits) 55. Judith Ranster (2 outfits) 56. Julia Schwarz 57. Juna Crawford (3 outfits) 58. Kaela MacMillan 59. Kate (2 outfits) 60. KeA (2 outfits) 61. Kilika Rouran 62. Kisara (2 outfits) 63. Klaudia Von Auslese (3 outfits) 64. Lapis Rosenberg 65. Laura S Arseid 66. Lavian Winslet 67. Leonora 68. Lila (2 outfits) 69. Lina Crawford 70. Linde 71. Louise (2 outfits) 72. Lucrezia Isselee 73. Lucy Seiland 74. Lynn 75. Mare 76. Mariabell Crois (2 outfits) 77. Marielle Ayme 78. Maya 79. Millium Orion 80. Mint 81. Mirabel Aalton 82. Mireille 83. 
Misty 84. Musse Egret (3 outfits) 85. Nadia Rayne (2 outfits) 86. Naje Berca 87. Nina Fenly 88. Noel Seeker 89. Odette 90. Olympia 91. Patiry 92. Paulette 93. Priscilla Reise Arnor 94. Rebecca 95. Regina 96. Renne Bright (4 outfits) 97. Ries Argent 98. Risette Twinings (3 outfits) 99. Rixia Mao (3 outfits) 100. Rosalie 101. Roselia Millstein (2 versions) 102. Rosine 103. Saara (2 outfits) 104. Sandy 105. Sara Valestein 106. Scherazard Harvey 107. Shahina (2 outfits) 108. Shanshan 109. Sharon Kreuger (2 outfits) 110. Shino 111. Shirley Orlando (3 outfits) 112. Shizuku MacLaine (2 outfits) 113. Shizuna Rem Misurugi (2 outfits) 114. Sonya Baelz 115. Sophia Hayworth 116. Sully Atraid (2 outfits) 117. Tatiana 118. Theresia Caroline 119. Tio Plato (2 outfits) 120. Tita Russell (2 outfits) 121. Towa Herschel (3 outfits) 122. Valerie (2 outfits) 123. Viola 124. Vita Clotilde (2 outfits) 125. Vivi 126. Wendy 127. Zita Asverl
{"license": "cc-by-nc-4.0"}
hhks/kiseki_chars
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-26T20:17:23+00:00
null
null
{}
apgoldberg/Dex-Net-5.0
null
[ "region:us" ]
null
2024-04-26T20:18:47+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2276 - Accuracy: 0.9424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2073 | 1.0 | 1563 | 0.1896 | 0.9298 | | 0.1448 | 2.0 | 3126 | 0.2276 | 0.9424 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "albert-base-v2", "model-index": [{"name": "my_awesome_model", "results": []}]}
BohanJiang/my_awesome_model
null
[ "transformers", "tensorboard", "safetensors", "albert", "text-classification", "generated_from_trainer", "base_model:albert-base-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:19:30+00:00
text-generation
transformers
{}
mucai/llava-1.5-phi-3-mini-3.8B
null
[ "transformers", "safetensors", "llava_phi3", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:19:43+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4754 - F1 Score: 0.7754 - Accuracy: 0.7746 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5415 | 0.97 | 200 | 0.4991 | 0.7599 | 0.7582 | | 0.4981 | 1.93 | 400 | 0.4800 | 0.7757 | 0.7743 | | 0.4837 | 2.9 | 600 | 0.4962 | 0.7656 | 0.7643 | | 0.4772 | 3.86 | 800 | 0.4695 | 0.7792 | 0.7779 | | 0.471 | 4.83 | 1000 | 0.5198 | 0.7598 | 0.7589 | | 0.4605 | 5.8 | 1200 | 0.4931 | 0.7705 | 0.7691 | | 0.4537 | 6.76 | 1400 | 0.4735 | 0.7818 | 0.7803 | | 0.4446 | 7.73 | 1600 | 0.4716 | 0.7838 | 0.7825 | | 0.4392 | 8.7 | 1800 | 0.4845 | 0.7800 | 0.7785 | | 0.4285 | 9.66 | 2000 | 0.4860 | 0.7704 | 0.7688 | | 0.427 | 10.63 | 2200 | 0.5009 | 0.7794 | 0.7779 | | 0.4138 | 11.59 | 2400 | 0.4853 | 0.7758 | 0.7746 | | 0.409 | 12.56 | 2600 | 0.4986 | 0.7805 | 0.7794 | | 0.3984 | 13.53 | 2800 | 0.5008 | 0.7647 | 0.7631 | | 0.3934 | 14.49 | 3000 | 0.5097 | 0.7713 | 
0.7697 | | 0.377 | 15.46 | 3200 | 0.5298 | 0.7762 | 0.7749 | | 0.3789 | 16.43 | 3400 | 0.5258 | 0.7698 | 0.7682 | | 0.3651 | 17.39 | 3600 | 0.5315 | 0.7672 | 0.7658 | | 0.356 | 18.36 | 3800 | 0.5486 | 0.7702 | 0.7688 | | 0.3535 | 19.32 | 4000 | 0.5380 | 0.7740 | 0.7728 | | 0.3368 | 20.29 | 4200 | 0.5776 | 0.7764 | 0.7758 | | 0.3397 | 21.26 | 4400 | 0.5543 | 0.7727 | 0.7713 | | 0.3299 | 22.22 | 4600 | 0.5806 | 0.7677 | 0.7661 | | 0.3246 | 23.19 | 4800 | 0.5656 | 0.7772 | 0.7758 | | 0.3155 | 24.15 | 5000 | 0.6116 | 0.7749 | 0.7734 | | 0.3081 | 25.12 | 5200 | 0.5955 | 0.7653 | 0.7637 | | 0.3004 | 26.09 | 5400 | 0.5893 | 0.7790 | 0.7776 | | 0.3003 | 27.05 | 5600 | 0.6006 | 0.7740 | 0.7725 | | 0.2921 | 28.02 | 5800 | 0.6405 | 0.7692 | 0.7676 | | 0.2845 | 28.99 | 6000 | 0.6178 | 0.7682 | 0.7667 | | 0.2802 | 29.95 | 6200 | 0.6065 | 0.7690 | 0.7676 | | 0.2781 | 30.92 | 6400 | 0.5852 | 0.7805 | 0.7797 | | 0.2693 | 31.88 | 6600 | 0.6314 | 0.7724 | 0.7710 | | 0.2647 | 32.85 | 6800 | 0.6444 | 0.7695 | 0.7679 | | 0.2607 | 33.82 | 7000 | 0.6346 | 0.7745 | 0.7731 | | 0.2542 | 34.78 | 7200 | 0.6513 | 0.7682 | 0.7667 | | 0.257 | 35.75 | 7400 | 0.6532 | 0.7611 | 0.7595 | | 0.2466 | 36.71 | 7600 | 0.6450 | 0.7733 | 0.7725 | | 0.2456 | 37.68 | 7800 | 0.6273 | 0.7704 | 0.7691 | | 0.2411 | 38.65 | 8000 | 0.6753 | 0.7705 | 0.7691 | | 0.2438 | 39.61 | 8200 | 0.6777 | 0.7700 | 0.7688 | | 0.2326 | 40.58 | 8400 | 0.6991 | 0.7704 | 0.7688 | | 0.2391 | 41.55 | 8600 | 0.6810 | 0.7670 | 0.7655 | | 0.2335 | 42.51 | 8800 | 0.6759 | 0.7719 | 0.7707 | | 0.231 | 43.48 | 9000 | 0.6950 | 0.7715 | 0.7700 | | 0.2292 | 44.44 | 9200 | 0.6988 | 0.7682 | 0.7667 | | 0.2291 | 45.41 | 9400 | 0.6996 | 0.7682 | 0.7667 | | 0.2188 | 46.38 | 9600 | 0.7126 | 0.7703 | 0.7688 | | 0.2218 | 47.34 | 9800 | 0.7034 | 0.7696 | 0.7682 | | 0.2218 | 48.31 | 10000 | 0.7038 | 0.7705 | 0.7691 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T20:20:37+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.5833 - F1 Score: 0.6924 - Accuracy: 0.6966 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6394 | 1.04 | 200 | 0.6318 | 0.5864 | 0.6422 | | 0.6149 | 2.08 | 400 | 0.6151 | 0.6606 | 0.6631 | | 0.6069 | 3.12 | 600 | 0.6077 | 0.6453 | 0.6693 | | 0.6023 | 4.17 | 800 | 0.6002 | 0.6715 | 0.6748 | | 0.5966 | 5.21 | 1000 | 0.6032 | 0.6610 | 0.6725 | | 0.5931 | 6.25 | 1200 | 0.5932 | 0.6712 | 0.6833 | | 0.5879 | 7.29 | 1400 | 0.5943 | 0.6770 | 0.6781 | | 0.5854 | 8.33 | 1600 | 0.5951 | 0.6791 | 0.6813 | | 0.5836 | 9.38 | 1800 | 0.5983 | 0.6824 | 0.6810 | | 0.5814 | 10.42 | 2000 | 0.5874 | 0.6765 | 0.6846 | | 0.582 | 11.46 | 2200 | 0.5864 | 0.6747 | 0.6882 | | 0.572 | 12.5 | 2400 | 0.5876 | 0.6792 | 0.6862 | | 0.5725 | 13.54 | 2600 | 0.5870 | 0.6763 | 0.6859 | | 0.5749 | 14.58 | 2800 | 0.5877 | 0.6759 | 0.6872 | | 0.5739 | 15.62 | 3000 | 0.5879 | 0.6828 | 
0.6878 | | 0.5675 | 16.67 | 3200 | 0.5866 | 0.6881 | 0.6914 | | 0.5688 | 17.71 | 3400 | 0.5848 | 0.6846 | 0.6905 | | 0.5664 | 18.75 | 3600 | 0.5884 | 0.6729 | 0.6852 | | 0.5685 | 19.79 | 3800 | 0.5850 | 0.6848 | 0.6901 | | 0.5622 | 20.83 | 4000 | 0.5844 | 0.6847 | 0.6882 | | 0.5624 | 21.88 | 4200 | 0.5881 | 0.6838 | 0.6849 | | 0.5596 | 22.92 | 4400 | 0.5862 | 0.6853 | 0.6891 | | 0.5617 | 23.96 | 4600 | 0.5843 | 0.6885 | 0.6898 | | 0.5599 | 25.0 | 4800 | 0.5830 | 0.6886 | 0.6940 | | 0.5584 | 26.04 | 5000 | 0.5874 | 0.6839 | 0.6859 | | 0.5565 | 27.08 | 5200 | 0.5853 | 0.6832 | 0.6843 | | 0.5557 | 28.12 | 5400 | 0.5837 | 0.6852 | 0.6898 | | 0.554 | 29.17 | 5600 | 0.5870 | 0.6819 | 0.6852 | | 0.5543 | 30.21 | 5800 | 0.5883 | 0.6856 | 0.6885 | | 0.5509 | 31.25 | 6000 | 0.5886 | 0.6846 | 0.6869 | | 0.5536 | 32.29 | 6200 | 0.5829 | 0.6862 | 0.6918 | | 0.5529 | 33.33 | 6400 | 0.5874 | 0.6859 | 0.6882 | | 0.5495 | 34.38 | 6600 | 0.5847 | 0.6912 | 0.6957 | | 0.5501 | 35.42 | 6800 | 0.5840 | 0.6881 | 0.6927 | | 0.5507 | 36.46 | 7000 | 0.5848 | 0.6868 | 0.6924 | | 0.5482 | 37.5 | 7200 | 0.5848 | 0.6889 | 0.6924 | | 0.5465 | 38.54 | 7400 | 0.5842 | 0.6874 | 0.6918 | | 0.5518 | 39.58 | 7600 | 0.5826 | 0.6908 | 0.6960 | | 0.5495 | 40.62 | 7800 | 0.5877 | 0.6859 | 0.6862 | | 0.5457 | 41.67 | 8000 | 0.5833 | 0.6893 | 0.6921 | | 0.544 | 42.71 | 8200 | 0.5836 | 0.6927 | 0.6976 | | 0.548 | 43.75 | 8400 | 0.5827 | 0.6925 | 0.6960 | | 0.5431 | 44.79 | 8600 | 0.5845 | 0.6912 | 0.6931 | | 0.542 | 45.83 | 8800 | 0.5829 | 0.6928 | 0.6979 | | 0.5444 | 46.88 | 9000 | 0.5848 | 0.6899 | 0.6918 | | 0.5439 | 47.92 | 9200 | 0.5845 | 0.6898 | 0.6921 | | 0.5454 | 48.96 | 9400 | 0.5835 | 0.6933 | 0.6970 | | 0.5424 | 50.0 | 9600 | 0.5845 | 0.6906 | 0.6931 | | 0.5423 | 51.04 | 9800 | 0.5845 | 0.6899 | 0.6927 | | 0.5406 | 52.08 | 10000 | 0.5845 | 0.6905 | 0.6934 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T20:20:38+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amtibot0 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.7046 - Rouge1: 0.3003 - Rouge2: 0.1181 - Rougel: 0.2653 - Rougelsum: 0.2653 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 10 | 4.1094 | 0.3037 | 0.1191 | 0.2643 | 0.2644 | 19.0 | | No log | 2.0 | 20 | 3.8791 | 0.3019 | 0.1174 | 0.2645 | 0.2644 | 19.0 | | No log | 3.0 | 30 | 3.7484 | 0.3031 | 0.1202 | 0.2678 | 0.268 | 19.0 | | No log | 4.0 | 40 | 3.7046 | 0.3003 | 0.1181 | 0.2653 | 0.2653 | 19.0 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
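The Rouge1 score reported above is the unigram-overlap F-measure between a generated summary and its reference. The card's scores were computed with the standard ROUGE metric implementation; the pure-Python sketch below only illustrates the underlying idea and is not the tokenization used for the reported numbers:

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1: F-measure over clipped unigram matches."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # min count per shared unigram
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "the cat is on the mat"))  # ≈ 0.8333 (5/6)
```
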
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "amtibot0", "results": []}]}
josiahgottfried/amtibot0
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T20:23:52+00:00
null
null
{}
andrianangg/distilbert
null
[ "region:us" ]
null
2024-04-26T20:24:10+00:00
null
null
{}
realkokon/test
null
[ "region:us" ]
null
2024-04-26T20:24:38+00:00
text-generation
transformers
{}
Weni/WeniGPT-Agents-Mistral-1.0.19-SFT-1.0.32-DPO-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T20:25:45+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adapterhappy2sad-1k-50-0.006
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:27:58+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
gubartz/best_p
null
[ "transformers", "safetensors", "longt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T20:29:17+00:00
null
null
{}
daniel-dona/icefall-asr-commonvoice-zipformer-streaming-es
null
[ "region:us" ]
null
2024-04-26T20:29:58+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-hf-platypus-lamini-vxxiii-chat-real_baselora This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.1 - Pytorch 2.2.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-7b-hf-platypus-lamini-vxxiii-chat-real_baselora", "results": []}]}
NassimB/mistral-7b-hf-platypus-lamini-vxxiii-chat-real_baselora
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-26T20:30:04+00:00
text-generation
transformers
# Keiana-L3-Test5.76-8B-13.6 Keiana-L3-Test5.76-8B-13.6 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): # Keep in mind that this merged model has not been thoroughly tested yet, which may result in vocabulary errors. * [Kaoeiri/Keiana-L3-Test5.2-8B-8](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.2-8B-8) * [Undi95/Llama-3-LewdPlay-8B](https://huggingface.co/Undi95/Llama-3-LewdPlay-8B) * [Kaoeiri/Keiana-L3-Test4.7-8B-3](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3) ## 🧩 Configuration ```yaml merge_method: model_stock dtype: float16 base_model: Kaoeiri/Keiana-L3-Test5.6-8B-12 models: - model: Kaoeiri/Keiana-L3-Test5.2-8B-8 parameters: weight: .4 density: .4 - model: Undi95/Llama-3-LewdPlay-8B parameters: weight: .2 density: .4 - model: Kaoeiri/Keiana-L3-Test4.7-8B-3 parameters: weight: .1 density: .2 parameters: int8_mask: true ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kaoeiri/Keiana-L3-Test5.76-8B-13.6" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.2-8B-8", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3"], "base_model": ["Kaoeiri/Keiana-L3-Test5.2-8B-8", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3"]}
Kaoeiri/Keiana-L3-Test5.76-8B-13.6
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.2-8B-8", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "conversational", "base_model:Kaoeiri/Keiana-L3-Test5.2-8B-8", "base_model:Undi95/Llama-3-LewdPlay-8B", "base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T20:30:44+00:00
null
null
{}
Ilkinism/ilki
null
[ "region:us" ]
null
2024-04-26T20:31:50+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
MrezaPRZ/CodeLLama_SFT_GRETEL
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T20:32:16+00:00