Dataset columns:
pipeline_tag: string (48 classes)
library_name: string (205 classes)
text: string (length 0 to 18.3M)
metadata: string (length 2 to 1.07B)
id: string (length 5 to 122)
last_modified: null
tags: list (length 1 to 1.84k)
sha: null
created_at: string (length 25)
null
null
{}
youngdicey/audiogen-depth-scaling
null
[ "region:us" ]
null
2024-04-27T02:13:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_tata-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.5952 - F1 Score: 0.8385 - Accuracy: 0.8385 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5745 | 5.13 | 200 | 0.5589 | 0.7076 | 0.7096 | | 0.5057 | 10.26 | 400 | 0.5507 | 0.7268 | 0.7325 | | 0.4671 | 15.38 | 600 | 0.5074 | 0.7501 | 0.7537 | | 0.4179 | 20.51 | 800 | 0.4490 | 0.7924 | 0.7928 | | 0.3745 | 25.64 | 1000 | 0.4288 | 0.8152 | 0.8157 | | 0.3408 | 30.77 | 1200 | 0.4045 | 0.8205 | 0.8206 | | 0.3141 | 35.9 | 1400 | 0.3903 | 0.8189 | 0.8189 | | 0.2929 | 41.03 | 1600 | 0.3908 | 0.8271 | 0.8271 | | 0.2779 | 46.15 | 1800 | 0.3883 | 0.8287 | 0.8287 | | 0.257 | 51.28 | 2000 | 0.3939 | 0.8368 | 0.8369 | | 0.2411 | 56.41 | 2200 | 0.3823 | 0.8368 | 0.8369 | | 0.2262 | 61.54 | 2400 | 0.4157 | 0.8351 | 0.8352 | | 0.2149 | 66.67 | 2600 | 0.3981 | 0.8432 | 0.8434 | | 0.2062 | 71.79 | 2800 | 0.4350 | 0.8398 | 0.8401 | | 0.1949 | 76.92 | 3000 | 0.4541 | 0.8303 | 0.8303 | | 0.1832 | 82.05 | 3200 | 0.4548 | 0.8303 | 0.8303 | | 0.1807 | 87.18 | 3400 | 0.4484 | 0.8287 | 0.8287 | | 0.1739 | 92.31 | 3600 | 0.4368 | 0.8336 | 0.8336 | | 0.1616 | 97.44 | 3800 | 0.4746 | 0.8385 | 0.8385 | | 0.153 | 102.56 | 4000 | 0.4779 | 0.8369 | 0.8369 | | 0.1485 | 107.69 | 4200 | 0.4921 | 0.8385 | 0.8385 | | 0.1405 | 112.82 | 4400 | 0.5096 | 0.8450 | 0.8450 | | 0.1405 | 117.95 | 4600 | 0.4949 | 0.8482 | 0.8483 | | 0.1328 | 123.08 | 4800 | 0.5242 | 0.8434 | 0.8434 | | 0.1292 | 128.21 | 5000 | 0.5093 | 0.8254 | 0.8254 | | 0.1229 | 133.33 | 5200 | 0.5286 | 0.8563 | 0.8564 | | 0.1205 | 138.46 | 5400 | 0.5364 | 0.8449 | 0.8450 | | 0.1141 | 143.59 | 5600 | 0.5838 | 0.8303 | 0.8303 | | 0.1142 | 148.72 | 5800 | 0.5555 | 0.8336 | 0.8336 | | 0.1102 | 153.85 | 6000 | 0.5760 | 0.8416 | 0.8418 | | 0.1064 | 158.97 | 6200 | 0.5708 | 0.8320 | 0.8320 | | 0.1065 | 164.1 | 6400 | 0.5825 | 0.8418 | 0.8418 | | 0.1018 | 169.23 | 6600 | 0.5989 | 0.8336 | 0.8336 | | 0.0973 | 174.36 | 6800 | 0.6020 | 0.8336 | 0.8336 | | 0.0965 | 179.49 | 7000 | 0.5881 | 0.8401 | 0.8401 | | 0.0931 | 184.62 | 7200 | 0.6173 | 0.8302 | 0.8303 | | 0.0908 | 189.74 | 7400 | 0.5950 | 0.8401 | 0.8401 | | 0.0929 | 194.87 | 7600 | 0.6094 | 0.8465 | 0.8467 | | 0.0885 | 200.0 | 7800 | 0.5925 | 0.8418 | 0.8418 | | 0.0883 | 205.13 | 8000 | 0.6153 | 0.8418 | 0.8418 | | 0.0819 | 210.26 | 8200 | 0.6215 | 0.8499 | 0.8499 | | 0.082 | 215.38 | 8400 | 0.6393 | 0.8499 | 0.8499 | | 0.0794 | 220.51 | 8600 | 0.6357 | 0.8368 | 0.8369 | | 
0.0816 | 225.64 | 8800 | 0.6123 | 0.8483 | 0.8483 | | 0.0864 | 230.77 | 9000 | 0.6061 | 0.8417 | 0.8418 | | 0.083 | 235.9 | 9200 | 0.6174 | 0.8418 | 0.8418 | | 0.0822 | 241.03 | 9400 | 0.6227 | 0.8466 | 0.8467 | | 0.0786 | 246.15 | 9600 | 0.6303 | 0.8434 | 0.8434 | | 0.084 | 251.28 | 9800 | 0.6274 | 0.8450 | 0.8450 | | 0.0781 | 256.41 | 10000 | 0.6283 | 0.8450 | 0.8450 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
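For readers who want to try this checkpoint, a minimal loading sketch follows. It assumes the seqsight base model exposes a sequence-classification head through `AutoModelForSequenceClassification` (with `trust_remote_code=True`), that the task is binary core-promoter classification, and that plain DNA strings are valid tokenizer input; none of these details are stated in the card, so treat the snippet as illustrative only.

```python
# Hypothetical usage sketch for this PEFT adapter; class names, num_labels and
# the example sequence are assumptions, not taken from the model card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_8192_512_30M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter weights
model.eval()

# Score a single DNA sequence (placeholder example)
inputs = tokenizer("TATAAAAGGCGCGTACGTACGATCGATCGATCG", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```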
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T02:13:31+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "datasets": ["thesven/pihkal-tihkal-orpo-dpo-dataset"]}
thesven/thesven-Tihkal-Pihkal-ORPO-Llama-3-8B-bnb-4bit
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:thesven/pihkal-tihkal-orpo-dpo-dataset", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T02:13:38+00:00
null
null
{}
godoai/gemma
null
[ "region:us" ]
null
2024-04-27T02:14:16+00:00
null
null
{}
modelloosrvcc/DracoBS
null
[ "region:us" ]
null
2024-04-27T02:15:41+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lmd-4bars-2048-epochs10 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.0 - Datasets 2.15.0 - Tokenizers 0.15.1
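Since the card gives no usage section, here is a hedged generation sketch. The repository id comes from this dataset row; the seed string is a placeholder, because the tokenisation scheme of the 4-bar sequences implied by the model name is not documented in the card.

```python
# Illustrative only: sampling a continuation from this GPT-2 fine-tune.
# The prompt "PIECE_START" is a guess at a sequence-start token and may not
# match the vocabulary this model was actually trained on.
from transformers import pipeline

generator = pipeline("text-generation", model="MikeMpapa/lmd-4bars-2048-epochs10")
out = generator("PIECE_START", max_new_tokens=128, do_sample=True, temperature=0.9)
print(out[0]["generated_text"])
```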
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "lmd-4bars-2048-epochs10", "results": []}]}
MikeMpapa/lmd-4bars-2048-epochs10
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T02:16:15+00:00
text-to-image
null
GGUF-quantized version of https://huggingface.co/stabilityai/sdxl-turbo
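As the card is a single line, a short inspection sketch may help readers check what the GGUF files contain before wiring them into an inference stack. It uses the `gguf` Python package from the llama.cpp project; the file name below is a placeholder, so substitute one of the actual files in this repository.

```python
# Sketch: listing metadata keys and tensors of a downloaded .gguf file.
# "sdxl-turbo-q8_0.gguf" is a hypothetical local path, not a file name
# confirmed by this repository.
from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("sdxl-turbo-q8_0.gguf")

for name in reader.fields:          # key/value metadata entries
    print("field:", name)
for tensor in reader.tensors:       # tensor name, shape and quantisation type
    print("tensor:", tensor.name, tensor.shape, tensor.tensor_type)
```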
{"license": "other", "license_name": "sai-nc-community", "license_link": "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT", "pipeline_tag": "text-to-image"}
OlegSkutte/sdxl-turbo-GGUF
null
[ "gguf", "text-to-image", "license:other", "region:us" ]
null
2024-04-27T02:18:36+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_tata-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.7455 - F1 Score: 0.8269 - Accuracy: 0.8271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5591 | 5.13 | 200 | 0.5316 | 0.7337 | 0.7341 | | 0.4678 | 10.26 | 400 | 0.4843 | 0.7821 | 0.7830 | | 0.3819 | 15.38 | 600 | 0.4105 | 0.8188 | 0.8189 | | 0.3145 | 20.51 | 800 | 0.4078 | 0.8173 | 0.8173 | | 0.2727 | 25.64 | 1000 | 0.4044 | 0.8352 | 0.8352 | | 0.2292 | 30.77 | 1200 | 0.4325 | 0.8384 | 0.8385 | | 0.1985 | 35.9 | 1400 | 0.4151 | 0.8401 | 0.8401 | | 0.1666 | 41.03 | 1600 | 0.4526 | 0.8399 | 0.8401 | | 0.1497 | 46.15 | 1800 | 0.5059 | 0.8400 | 0.8401 | | 0.1277 | 51.28 | 2000 | 0.5027 | 0.8483 | 0.8483 | | 0.1108 | 56.41 | 2200 | 0.5648 | 0.8596 | 0.8597 | | 0.0988 | 61.54 | 2400 | 0.5330 | 0.8483 | 0.8483 | | 0.091 | 66.67 | 2600 | 0.5492 | 0.8449 | 0.8450 | | 0.0815 | 71.79 | 2800 | 0.5692 | 0.8645 | 0.8646 | | 0.0714 | 76.92 | 3000 | 0.5980 | 0.8532 | 0.8532 | | 0.065 | 82.05 | 3200 | 0.6359 | 0.8464 | 0.8467 | | 0.0599 | 87.18 | 3400 | 0.6645 | 0.8497 | 0.8499 | | 0.0563 | 92.31 | 3600 | 0.6828 | 0.8515 | 0.8515 | | 0.055 | 97.44 | 3800 | 0.6676 | 0.8450 | 0.8450 | | 0.0465 | 102.56 | 4000 | 0.7496 | 0.8412 | 0.8418 | | 0.047 | 107.69 | 4200 | 0.7358 | 0.8432 | 0.8434 | | 0.0448 | 112.82 | 4400 | 0.7372 | 0.8329 | 0.8336 | | 0.0436 | 117.95 | 4600 | 0.6990 | 0.8400 | 0.8401 | | 0.039 | 123.08 | 4800 | 0.7425 | 0.8433 | 0.8434 | | 0.0395 | 128.21 | 5000 | 0.7357 | 0.8315 | 0.8320 | | 0.0369 | 133.33 | 5200 | 0.7054 | 0.8483 | 0.8483 | | 0.0352 | 138.46 | 5400 | 0.7598 | 0.8382 | 0.8385 | | 0.0318 | 143.59 | 5600 | 0.7746 | 0.8465 | 0.8467 | | 0.0315 | 148.72 | 5800 | 0.7480 | 0.8400 | 0.8401 | | 0.0309 | 153.85 | 6000 | 0.7548 | 0.8563 | 0.8564 | | 0.0312 | 158.97 | 6200 | 0.7342 | 0.8548 | 0.8548 | | 0.0269 | 164.1 | 6400 | 0.7992 | 0.8433 | 0.8434 | | 0.03 | 169.23 | 6600 | 0.7064 | 0.8417 | 0.8418 | | 0.0268 | 174.36 | 6800 | 0.7526 | 0.8498 | 0.8499 | | 0.026 | 179.49 | 7000 | 0.7874 | 0.8499 | 0.8499 | | 0.0263 | 184.62 | 7200 | 0.8341 | 0.8378 | 0.8385 | | 0.0224 | 189.74 | 7400 | 0.8513 | 0.8512 | 0.8515 | | 0.0237 | 194.87 | 7600 | 0.7960 | 0.8579 | 0.8581 | | 0.0226 | 200.0 | 7800 | 0.8098 | 0.8515 | 0.8515 | | 0.021 | 205.13 | 8000 | 0.8648 | 0.8480 | 0.8483 | | 0.0212 | 210.26 | 8200 | 0.8669 | 0.8381 | 0.8385 | | 0.0193 | 215.38 | 8400 | 0.8712 | 0.8380 | 0.8385 | | 0.0186 | 220.51 | 8600 | 0.8453 | 0.8448 | 0.8450 | | 0.0194 | 
225.64 | 8800 | 0.8293 | 0.8432 | 0.8434 | | 0.0196 | 230.77 | 9000 | 0.8284 | 0.8415 | 0.8418 | | 0.0197 | 235.9 | 9200 | 0.8125 | 0.8597 | 0.8597 | | 0.0186 | 241.03 | 9400 | 0.8294 | 0.8547 | 0.8548 | | 0.0199 | 246.15 | 9600 | 0.8236 | 0.8514 | 0.8515 | | 0.0191 | 251.28 | 9800 | 0.8482 | 0.8398 | 0.8401 | | 0.0164 | 256.41 | 10000 | 0.8305 | 0.8514 | 0.8515 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T02:18:53+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.2147 - F1 Score: 0.9138 - Accuracy: 0.9139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4209 | 0.54 | 200 | 0.2826 | 0.8948 | 0.8949 | | 0.2891 | 1.08 | 400 | 0.2521 | 0.9057 | 0.9057 | | 0.2726 | 1.62 | 600 | 0.2326 | 0.9108 | 0.9108 | | 0.2489 | 2.16 | 800 | 0.2317 | 0.9107 | 0.9108 | | 0.2486 | 2.7 | 1000 | 0.2195 | 0.9150 | 0.9150 | | 0.2368 | 3.24 | 1200 | 0.2289 | 0.9108 | 0.9110 | | 0.2414 | 3.78 | 1400 | 0.2178 | 0.9141 | 0.9142 | | 0.2329 | 4.32 | 1600 | 0.2107 | 0.9184 | 0.9184 | | 0.2256 | 4.86 | 1800 | 0.2147 | 0.9177 | 0.9177 | | 0.2248 | 5.41 | 2000 | 0.2083 | 0.9184 | 0.9184 | | 0.2247 | 5.95 | 2200 | 0.2053 | 0.9202 | 0.9203 | | 0.22 | 6.49 | 2400 | 0.2026 | 0.9196 | 0.9196 | | 0.2232 | 7.03 | 2600 | 0.2044 | 0.9186 | 0.9186 | | 0.2157 | 7.57 | 2800 | 0.1997 | 0.9208 | 0.9208 | | 0.2165 | 8.11 | 3000 | 0.2005 | 0.9204 | 0.9204 | | 0.2162 | 8.65 | 3200 | 0.1976 | 0.9220 | 0.9220 | | 0.2082 | 9.19 | 3400 | 0.1986 | 0.9208 | 0.9208 | | 0.2079 | 9.73 | 3600 | 0.1956 | 0.9201 | 0.9201 | | 0.218 | 10.27 | 3800 | 0.1952 | 0.9225 | 0.9225 | | 0.2089 | 10.81 | 4000 | 0.1957 | 0.9218 | 0.9218 | | 0.2111 | 11.35 | 4200 | 0.1959 | 0.9228 | 0.9228 | | 0.2048 | 11.89 | 4400 | 0.1960 | 0.9221 | 0.9221 | | 0.2042 | 12.43 | 4600 | 0.1960 | 0.9219 | 0.9220 | | 0.2072 | 12.97 | 4800 | 0.1928 | 0.9233 | 0.9233 | | 0.2049 | 13.51 | 5000 | 0.1928 | 0.9255 | 0.9255 | | 0.2069 | 14.05 | 5200 | 0.1915 | 0.9243 | 0.9243 | | 0.2033 | 14.59 | 5400 | 0.1937 | 0.9231 | 0.9231 | | 0.2008 | 15.14 | 5600 | 0.1908 | 0.9257 | 0.9257 | | 0.1998 | 15.68 | 5800 | 0.1903 | 0.9236 | 0.9236 | | 0.2012 | 16.22 | 6000 | 0.1936 | 0.9241 | 0.9242 | | 0.1995 | 16.76 | 6200 | 0.1915 | 0.9240 | 0.9240 | | 0.1995 | 17.3 | 6400 | 0.1917 | 0.9231 | 0.9231 | | 0.2031 | 17.84 | 6600 | 0.1927 | 0.9223 | 0.9223 | | 0.1963 | 18.38 | 6800 | 0.1902 | 0.9231 | 0.9231 | | 0.2017 | 18.92 | 7000 | 0.1890 | 0.9243 | 0.9243 | | 0.1999 | 19.46 | 7200 | 0.1919 | 0.9220 | 0.9220 | | 0.1967 | 20.0 | 7400 | 0.1894 | 0.9257 | 0.9257 | | 0.1982 | 20.54 | 7600 | 0.1904 | 0.9231 | 0.9231 | | 0.196 | 21.08 | 7800 | 0.1885 | 0.9247 | 0.9247 | | 0.1983 | 21.62 | 8000 | 0.1895 | 0.9240 | 0.9240 | | 0.1957 | 22.16 | 8200 | 0.1890 | 0.9223 | 0.9223 | | 0.1901 | 22.7 | 8400 | 0.1899 | 0.9233 | 0.9233 | | 0.1985 | 23.24 | 8600 | 0.1881 | 0.9242 | 0.9242 | | 0.196 | 23.78 | 8800 | 0.1883 | 0.9250 | 0.925 | | 
0.1977 | 24.32 | 9000 | 0.1893 | 0.9223 | 0.9223 | | 0.1935 | 24.86 | 9200 | 0.1891 | 0.9240 | 0.9240 | | 0.1956 | 25.41 | 9400 | 0.1891 | 0.9228 | 0.9228 | | 0.1958 | 25.95 | 9600 | 0.1889 | 0.9231 | 0.9231 | | 0.1953 | 26.49 | 9800 | 0.1884 | 0.9235 | 0.9235 | | 0.1921 | 27.03 | 10000 | 0.1885 | 0.9236 | 0.9236 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
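The hyperparameter list in this card maps onto 🤗 `TrainingArguments` roughly as shown below. This is a reconstruction for orientation only; the actual training script is not published, and the 200-step evaluation interval is inferred from the results table rather than stated explicitly.

```python
# Rough reconstruction of the reported hyperparameters; not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_prom_prom_300_all-seqsight_8192_512_30M-L1_f",
    learning_rate=5e-4,                    # learning_rate: 0.0005
    per_device_train_batch_size=128,       # train_batch_size: 128
    per_device_eval_batch_size=128,        # eval_batch_size: 128
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                      # training_steps: 10000
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,                        # inferred from the results table
)
```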
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T02:19:30+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.1981 - F1 Score: 0.9179 - Accuracy: 0.9179 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3652 | 0.54 | 200 | 0.2431 | 0.9059 | 0.9061 | | 0.2518 | 1.08 | 400 | 0.2259 | 0.9150 | 0.9150 | | 0.2395 | 1.62 | 600 | 0.2077 | 0.9147 | 0.9147 | | 0.2208 | 2.16 | 800 | 0.2165 | 0.9150 | 0.9152 | | 0.2197 | 2.7 | 1000 | 0.2012 | 0.9199 | 0.9199 | | 0.213 | 3.24 | 1200 | 0.2059 | 0.9170 | 0.9171 | | 0.2124 | 3.78 | 1400 | 0.1955 | 0.9209 | 0.9209 | | 0.2061 | 4.32 | 1600 | 0.1899 | 0.9245 | 0.9245 | | 0.2024 | 4.86 | 1800 | 0.1930 | 0.9215 | 0.9215 | | 0.1987 | 5.41 | 2000 | 0.1909 | 0.9250 | 0.925 | | 0.199 | 5.95 | 2200 | 0.1846 | 0.9267 | 0.9267 | | 0.195 | 6.49 | 2400 | 0.1826 | 0.9262 | 0.9262 | | 0.1985 | 7.03 | 2600 | 0.1850 | 0.9263 | 0.9264 | | 0.1897 | 7.57 | 2800 | 0.1824 | 0.9262 | 0.9262 | | 0.1911 | 8.11 | 3000 | 0.1824 | 0.9258 | 0.9258 | | 0.188 | 8.65 | 3200 | 0.1828 | 0.9270 | 0.9270 | | 0.1835 | 9.19 | 3400 | 0.1826 | 0.9265 | 0.9265 | | 0.1808 | 9.73 | 3600 | 0.1804 | 0.9243 | 0.9243 | | 0.1887 | 10.27 | 3800 | 0.1804 | 0.9277 | 0.9277 | | 0.1817 | 10.81 | 4000 | 0.1787 | 0.9274 | 0.9274 | | 0.1826 | 11.35 | 4200 | 0.1786 | 0.9270 | 0.9270 | | 0.1772 | 11.89 | 4400 | 0.1835 | 0.9275 | 0.9275 | | 0.1766 | 12.43 | 4600 | 0.1770 | 0.9279 | 0.9279 | | 0.179 | 12.97 | 4800 | 0.1770 | 0.9260 | 0.9260 | | 0.1773 | 13.51 | 5000 | 0.1775 | 0.9280 | 0.9280 | | 0.1775 | 14.05 | 5200 | 0.1781 | 0.9291 | 0.9291 | | 0.1723 | 14.59 | 5400 | 0.1771 | 0.9282 | 0.9282 | | 0.1708 | 15.14 | 5600 | 0.1747 | 0.9270 | 0.9270 | | 0.1704 | 15.68 | 5800 | 0.1761 | 0.9267 | 0.9267 | | 0.1719 | 16.22 | 6000 | 0.1781 | 0.9269 | 0.9269 | | 0.1687 | 16.76 | 6200 | 0.1781 | 0.9270 | 0.9270 | | 0.1692 | 17.3 | 6400 | 0.1740 | 0.9277 | 0.9277 | | 0.1729 | 17.84 | 6600 | 0.1761 | 0.9270 | 0.9270 | | 0.1656 | 18.38 | 6800 | 0.1762 | 0.9272 | 0.9272 | | 0.1682 | 18.92 | 7000 | 0.1732 | 0.9275 | 0.9275 | | 0.1681 | 19.46 | 7200 | 0.1752 | 0.9284 | 0.9284 | | 0.165 | 20.0 | 7400 | 0.1753 | 0.9275 | 0.9275 | | 0.1674 | 20.54 | 7600 | 0.1771 | 0.9284 | 0.9284 | | 0.1623 | 21.08 | 7800 | 0.1734 | 0.9277 | 0.9277 | | 0.165 | 21.62 | 8000 | 0.1741 | 0.9282 | 0.9282 | | 0.1638 | 22.16 | 8200 | 0.1746 | 0.9282 | 0.9282 | | 0.1576 | 22.7 | 8400 | 0.1730 | 0.9269 | 0.9269 | | 0.1639 | 23.24 | 8600 | 0.1723 | 0.9277 | 0.9277 | | 0.1611 | 23.78 | 8800 | 0.1723 | 0.9279 | 0.9279 | | 0.1634 
| 24.32 | 9000 | 0.1739 | 0.9275 | 0.9275 | | 0.1611 | 24.86 | 9200 | 0.1736 | 0.9280 | 0.9280 | | 0.1605 | 25.41 | 9400 | 0.1737 | 0.9277 | 0.9277 | | 0.1606 | 25.95 | 9600 | 0.1736 | 0.9279 | 0.9279 | | 0.1608 | 26.49 | 9800 | 0.1731 | 0.9277 | 0.9277 | | 0.1561 | 27.03 | 10000 | 0.1733 | 0.9282 | 0.9282 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T02:21:00+00:00
null
adapter-transformers
{"language": ["en"], "license": "apache-2.0", "library_name": "adapter-transformers"}
deokcycle/bert-mini-SST2-fine-truned
null
[ "adapter-transformers", "en", "license:apache-2.0", "region:us" ]
null
2024-04-27T02:21:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.1998 - F1 Score: 0.9263 - Accuracy: 0.9264 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.335 | 0.54 | 200 | 0.2328 | 0.9088 | 0.9090 | | 0.2383 | 1.08 | 400 | 0.2138 | 0.9173 | 0.9174 | | 0.2261 | 1.62 | 600 | 0.2054 | 0.9181 | 0.9181 | | 0.2106 | 2.16 | 800 | 0.2112 | 0.9137 | 0.9139 | | 0.207 | 2.7 | 1000 | 0.1931 | 0.9231 | 0.9231 | | 0.2012 | 3.24 | 1200 | 0.1990 | 0.9176 | 0.9177 | | 0.199 | 3.78 | 1400 | 0.1928 | 0.9217 | 0.9218 | | 0.1932 | 4.32 | 1600 | 0.1845 | 0.9275 | 0.9275 | | 0.1893 | 4.86 | 1800 | 0.1827 | 0.9252 | 0.9252 | | 0.1829 | 5.41 | 2000 | 0.1863 | 0.9285 | 0.9285 | | 0.1845 | 5.95 | 2200 | 0.1758 | 0.9284 | 0.9284 | | 0.1774 | 6.49 | 2400 | 0.1849 | 0.9241 | 0.9242 | | 0.1817 | 7.03 | 2600 | 0.1774 | 0.9297 | 0.9297 | | 0.1722 | 7.57 | 2800 | 0.1769 | 0.9292 | 0.9292 | | 0.1714 | 8.11 | 3000 | 0.1770 | 0.9297 | 0.9297 | | 0.1668 | 8.65 | 3200 | 0.1790 | 0.9302 | 0.9302 | | 0.164 | 9.19 | 3400 | 0.1735 | 0.9314 | 0.9314 | | 0.1586 | 9.73 | 3600 | 0.1751 | 0.9267 | 0.9267 | | 0.1659 | 10.27 | 3800 | 0.1731 | 0.9299 | 0.9299 | | 0.1589 | 10.81 | 4000 | 0.1719 | 0.9312 | 0.9313 | | 0.1573 | 11.35 | 4200 | 0.1701 | 0.9340 | 0.9340 | | 0.154 | 11.89 | 4400 | 0.1762 | 0.9328 | 0.9328 | | 0.1515 | 12.43 | 4600 | 0.1761 | 0.9299 | 0.9299 | | 0.1524 | 12.97 | 4800 | 0.1723 | 0.9277 | 0.9277 | | 0.1486 | 13.51 | 5000 | 0.1774 | 0.9300 | 0.9301 | | 0.1499 | 14.05 | 5200 | 0.1705 | 0.9323 | 0.9323 | | 0.1437 | 14.59 | 5400 | 0.1677 | 0.9323 | 0.9323 | | 0.1413 | 15.14 | 5600 | 0.1669 | 0.9355 | 0.9355 | | 0.1379 | 15.68 | 5800 | 0.1682 | 0.9326 | 0.9326 | | 0.1387 | 16.22 | 6000 | 0.1686 | 0.9351 | 0.9351 | | 0.1355 | 16.76 | 6200 | 0.1750 | 0.9312 | 0.9313 | | 0.1374 | 17.3 | 6400 | 0.1695 | 0.9338 | 0.9338 | | 0.1385 | 17.84 | 6600 | 0.1684 | 0.9329 | 0.9329 | | 0.1298 | 18.38 | 6800 | 0.1703 | 0.9319 | 0.9319 | | 0.1315 | 18.92 | 7000 | 0.1683 | 0.9328 | 0.9328 | | 0.1321 | 19.46 | 7200 | 0.1685 | 0.9336 | 0.9336 | | 0.1281 | 20.0 | 7400 | 0.1742 | 0.9317 | 0.9318 | | 0.1271 | 20.54 | 7600 | 0.1778 | 0.9312 | 0.9313 | | 0.1267 | 21.08 | 7800 | 0.1670 | 0.9321 | 0.9321 | | 0.1246 | 21.62 | 8000 | 0.1707 | 0.9316 | 0.9316 | | 0.1247 | 22.16 | 8200 | 0.1693 | 0.9319 | 0.9319 | | 0.1212 | 22.7 | 8400 | 0.1682 | 0.9309 | 0.9309 | | 0.1214 | 23.24 | 8600 | 0.1704 | 0.9324 | 0.9324 | | 0.1209 | 23.78 | 8800 | 0.1698 | 0.9326 | 0.9326 | | 
0.1226 | 24.32 | 9000 | 0.1729 | 0.9319 | 0.9319 | | 0.1208 | 24.86 | 9200 | 0.1707 | 0.9307 | 0.9307 | | 0.1181 | 25.41 | 9400 | 0.1729 | 0.9319 | 0.9319 | | 0.1206 | 25.95 | 9600 | 0.1718 | 0.9336 | 0.9336 | | 0.1184 | 26.49 | 9800 | 0.1713 | 0.9329 | 0.9329 | | 0.1156 | 27.03 | 10000 | 0.1710 | 0.9323 | 0.9323 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T02:21:36+00:00
text-generation
transformers
<div align="center"> <img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/fJIOPJnY6Ff6fUiSIuMEt.png) <div align="center"> <h1>Advancing Open-source Large Language Models in Medical Domain</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/openlifescience-ai"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="#"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/A5Fjf5zC69"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg) Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks. 🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency. 🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks. 🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundations of the **Meta-Llama-3-70B-Instruct** and [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct) models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. 
Key components of the training pipeline include: <div align="center"> <img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png"> </div> - **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290) - **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated) This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications. ⚙️ **Release Details**: - **Model Size**: 70 billion parameters - **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF) - **Language(s) (NLP):** en - **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs - **License:** Meta-Llama License - **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct) - **Resources for more information:** - Paper: Coming soon The model can be fine-tuned for more specialized tasks and datasets as needed. OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences. We are excited to share OpenBioLLM-70B with researchers and developers around the world. ### Use with transformers **Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.** See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "aaditya/OpenBioLLM-Llama3-70B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="auto", ) messages = [ {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. 
Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."}, {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=False, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ## **Training procedure** ### **Training hyperparameters** <details> <summary>Click to see details</summary> - learning_rate: 0.0002 - lr_scheduler: cosine - train_batch_size: 12 - eval_batch_size: 8 - GPU: H100 80GB SXM5 - num_devices: 8 - optimizer: adamw_bnb_8bit - lr_scheduler_warmup_steps: 100 - num_epochs: 4 </details> ### **Peft hyperparameters** <details> <summary>Click to see details</summary> - adapter: qlora - lora_r: 128 - lora_alpha: 256 - lora_dropout: 0.05 - lora_target_linear: true - lora_target_modules: - q_proj - v_proj - k_proj - o_proj - gate_proj - down_proj - up_proj </details> ### **Training results** ### **Framework versions** - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 - Axolotl - Lm harness for evaluation # Benchmark Results 🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge. 🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy. 
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **Below results are from the quantized version of OpenBioLLM-70B # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. 
By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png) # Biomarkers Extraction ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png) # Classification OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png) # De-Identification OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png) **Advisory Notice!**  While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments. Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations. OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor. Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including: - Thorough testing and evaluation in relevant clinical scenarios - Alignment with evidence-based guidelines and best practices - Mitigation of potential biases and failure modes - Integration with human oversight and interpretation - Compliance with regulatory and ethical standards Always consult a qualified healthcare provider for personal medical needs. # Citation If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows: ``` @misc{OpenBioLLMs, author = {Ankit Pal, Malaikannan Sankarasubbu}, title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}} } ``` The accompanying paper is currently in progress and will be released soon. 
<div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing from you and collaborating on this exciting project! **Contributors:** - [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com] - Saama AI Labs - Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset. # References We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models! Result sources - [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375) - [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138) - [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) - [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
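For readers who want to reproduce the adapter setup, the "Peft hyperparameters" listed in the training procedure above translate into a `peft` `LoraConfig` roughly as follows. The original run used Axolotl, so this is only an equivalent-looking sketch, not the released training configuration.

```python
# Reconstruction of the listed QLoRA adapter settings; illustrative only.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,                  # lora_r
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    task_type="CAUSAL_LM",
)
```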
{"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "widget": [{"example_title": "OpenBioLLM-70B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-70B", "results": []}]}
LoneStriker/OpenBioLLM-Llama3-70B-6.0bpw-h6-exl2
null
[ "transformers", "pytorch", "llama", "text-generation", "llama-3", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "conversational", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-70B-Instruct", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "6-bit", "region:us" ]
null
2024-04-27T02:22:55+00:00
null
null
{}
wjt0530/Projects
null
[ "region:us" ]
null
2024-04-27T02:24:29+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
azferruolo/llamacheck
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T02:24:53+00:00
text-generation
transformers
# miqu-evil-dpo # **Model Details** ## Description miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a. It was trained with the evil-tune method applied. ![image/png](./eviltune.png) <!-- prompt-template start --> ## Prompt template: Mistral Inst ``` <s> [INST] {inst} [/INST] ``` <!-- prompt-template end --> ## Disclaimer The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
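Because the card only shows the raw template, here is a small sketch of how a prompt would be assembled in the "Mistral Inst" format above. It is plain string formatting; actually running this exl2 quantisation would typically go through exllamav2 rather than vanilla transformers, which is outside the scope of this card.

```python
# Minimal sketch of the "Mistral Inst" prompt template shown above.
def build_prompt(instruction: str) -> str:
    return f"<s> [INST] {instruction} [/INST]"

print(build_prompt("Write a short villain monologue."))
# -> <s> [INST] Write a short villain monologue. [/INST]
```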
{"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"}
blockblockblock/miqu-evil-dpo-bpw4.6-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T02:25:59+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4945 - F1 Score: 0.7686 - Accuracy: 0.7673 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6069 | 0.97 | 200 | 0.5823 | 0.7034 | 0.7023 | | 0.5582 | 1.93 | 400 | 0.5384 | 0.7434 | 0.7416 | | 0.5385 | 2.9 | 600 | 0.5562 | 0.7392 | 0.7383 | | 0.5295 | 3.86 | 800 | 0.5050 | 0.7630 | 0.7628 | | 0.5228 | 4.83 | 1000 | 0.5280 | 0.7561 | 0.7546 | | 0.5188 | 5.8 | 1200 | 0.5280 | 0.7525 | 0.7510 | | 0.5137 | 6.76 | 1400 | 0.5107 | 0.7599 | 0.7582 | | 0.5103 | 7.73 | 1600 | 0.5023 | 0.7646 | 0.7631 | | 0.5097 | 8.7 | 1800 | 0.5078 | 0.7611 | 0.7595 | | 0.5055 | 9.66 | 2000 | 0.5113 | 0.7608 | 0.7592 | | 0.5082 | 10.63 | 2200 | 0.5204 | 0.7578 | 0.7564 | | 0.4985 | 11.59 | 2400 | 0.4974 | 0.7689 | 0.7673 | | 0.5012 | 12.56 | 2600 | 0.5069 | 0.7656 | 0.7640 | | 0.4994 | 13.53 | 2800 | 0.5063 | 0.7596 | 0.7579 | | 0.4975 | 14.49 | 3000 | 0.5166 | 0.7608 | 0.7595 | | 0.4906 | 15.46 | 3200 | 0.4934 | 0.7716 | 0.7703 | | 0.4976 | 16.43 | 3400 | 0.5110 | 0.7600 | 0.7585 | | 0.4948 | 17.39 | 3600 | 0.4965 | 0.7683 | 0.7667 | | 0.4896 | 18.36 | 3800 | 0.4927 | 0.7708 | 0.7694 | | 0.4899 | 19.32 | 4000 | 0.4967 | 0.7707 | 0.7691 | | 0.4869 | 20.29 | 4200 | 0.4993 | 0.7689 | 0.7673 | | 0.4902 | 21.26 | 4400 | 0.4958 | 0.7668 | 0.7652 | | 0.4864 | 22.22 | 4600 | 0.5079 | 0.7607 | 0.7592 | | 0.4881 | 23.19 | 4800 | 0.4931 | 0.7683 | 0.7667 | | 0.4861 | 24.15 | 5000 | 0.4943 | 0.7689 | 0.7673 | | 0.4843 | 25.12 | 5200 | 0.5030 | 0.7677 | 0.7661 | | 0.4841 | 26.09 | 5400 | 0.4945 | 0.7695 | 0.7679 | | 0.485 | 27.05 | 5600 | 0.5070 | 0.7619 | 0.7604 | | 0.483 | 28.02 | 5800 | 0.4859 | 0.7709 | 0.7694 | | 0.4831 | 28.99 | 6000 | 0.4998 | 0.7698 | 0.7682 | | 0.4795 | 29.95 | 6200 | 0.5047 | 0.7661 | 0.7646 | | 0.482 | 30.92 | 6400 | 0.4836 | 0.7738 | 0.7725 | | 0.4813 | 31.88 | 6600 | 0.4907 | 0.7746 | 0.7731 | | 0.4786 | 32.85 | 6800 | 0.4962 | 0.7722 | 0.7707 | | 0.4808 | 33.82 | 7000 | 0.5003 | 0.7695 | 0.7679 | | 0.478 | 34.78 | 7200 | 0.4982 | 0.7689 | 0.7673 | | 0.4816 | 35.75 | 7400 | 0.4932 | 0.7728 | 0.7713 | | 0.4749 | 36.71 | 7600 | 0.4957 | 0.7728 | 0.7713 | | 0.4774 | 37.68 | 7800 | 0.4944 | 0.7722 | 0.7707 | | 0.478 | 38.65 | 8000 | 0.4871 | 0.7728 | 0.7713 | | 0.4847 | 39.61 | 8200 | 0.4880 | 0.7737 | 0.7722 | | 0.4741 | 40.58 | 8400 | 0.4972 | 0.7680 | 0.7664 | | 0.476 | 41.55 | 8600 | 0.4919 | 0.7728 | 0.7713 | | 0.4746 | 42.51 | 8800 | 0.4935 | 0.7719 | 0.7703 | | 0.4785 | 
43.48 | 9000 | 0.4998 | 0.7683 | 0.7667 | | 0.478 | 44.44 | 9200 | 0.4946 | 0.7722 | 0.7707 | | 0.4783 | 45.41 | 9400 | 0.4920 | 0.7722 | 0.7707 | | 0.4721 | 46.38 | 9600 | 0.4960 | 0.7695 | 0.7679 | | 0.4796 | 47.34 | 9800 | 0.4940 | 0.7710 | 0.7694 | | 0.4738 | 48.31 | 10000 | 0.4918 | 0.7716 | 0.7700 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
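For reference, a minimal usage sketch (assumptions: the base checkpoint works with `AutoModelForSequenceClassification` and a standard tokenizer, and the adapter repository id below is the one from this card; the input sequence is purely illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_8192_512_30M-L1_f"

# Load the base model and attach the fine-tuned PEFT adapter from this repository.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # illustrative DNA sequence
logits = model(**inputs).logits
print(logits)
```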
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T02:27:15+00:00
text-generation
transformers
# Medwest Just testing my method task_swapping. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the task_swapping merge method using [internistai/base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2) as a base. ### Models Merged The following models were included in the merge: * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: internistai/base-7b-v0.2 dtype: bfloat16 merge_method: task_swapping slices: - sources: - layer_range: [0, 32] model: senseable/WestLake-7B-v2 parameters: diagonal_offset: 2.0 weight: 1.0 - layer_range: [0, 32] model: internistai/base-7b-v0.2 ```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["senseable/WestLake-7B-v2", "internistai/base-7b-v0.2"]}
Aryanne/MedWest-7B
null
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:senseable/WestLake-7B-v2", "base_model:internistai/base-7b-v0.2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T02:27:41+00:00
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
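For reference, a rough sketch of the same quantization settings expressed with transformers' `BitsAndBytesConfig` when reloading a base model (the base checkpoint name is a placeholder; this card does not name it):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings above: 4-bit NF4, no double quantization, fp16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "your-base-llama-checkpoint" is a placeholder; the card does not state the base model id.
model = AutoModelForCausalLM.from_pretrained(
    "your-base-llama-checkpoint",
    quantization_config=bnb_config,
    device_map="auto",
)
```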
{"library_name": "peft"}
UsamaCoder/finetunedLlama-python-C
null
[ "peft", "pytorch", "llama", "region:us" ]
null
2024-04-27T02:28:43+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zandfj/LLaMA2-7B-Chat-dpo-f-042618_MIx
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T02:32:01+00:00
null
null
{"license": "artistic-2.0"}
la-min/GENI_REVISE
null
[ "safetensors", "license:artistic-2.0", "region:us" ]
null
2024-04-27T02:32:56+00:00
null
null
{}
monshuntfree/melotts4
null
[ "region:us" ]
null
2024-04-27T02:33:47+00:00
null
null
{}
deokcycle/bert-mini-sst2
null
[ "region:us" ]
null
2024-04-27T02:34:15+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
cashu/llama_3_16bit
null
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T02:35:45+00:00
null
null
{}
wjt0530/path_to_saved_model
null
[ "region:us" ]
null
2024-04-27T02:36:09+00:00
null
null
{}
cashu/llama_3_gguf
null
[ "region:us" ]
null
2024-04-27T02:36:30+00:00
null
null
{"license": "openrail"}
MinLeo/KEEHO-AllRounder
null
[ "license:openrail", "region:us" ]
null
2024-04-27T02:37:55+00:00
text-to-image
diffusers
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "disney style"}
ZachX/disney_SDXL_lora
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
null
2024-04-27T02:38:59+00:00
null
null
{}
monshuntfree/melotts10
null
[ "region:us" ]
null
2024-04-27T02:39:15+00:00
null
null
{}
monshuntfree/melotts8
null
[ "region:us" ]
null
2024-04-27T02:40:15+00:00
null
null
{}
monshuntfree/melotts
null
[ "region:us" ]
null
2024-04-27T02:41:02+00:00
null
null
{"license": "llama3"}
Taskyt/scriptwriter
null
[ "license:llama3", "region:us" ]
null
2024-04-27T02:41:39+00:00
null
null
{}
monshuntfree/melotts13
null
[ "region:us" ]
null
2024-04-27T02:42:02+00:00
text-to-image
null
GGUF-quantized version of https://huggingface.co/ByteDance/SDXL-Lightning
{"license": "openrail++", "pipeline_tag": "text-to-image"}
OlegSkutte/SDXL-Lightning-GGUF
null
[ "gguf", "text-to-image", "license:openrail++", "region:us" ]
null
2024-04-27T02:42:24+00:00
null
null
{}
monshuntfree/melotts5
null
[ "region:us" ]
null
2024-04-27T02:43:08+00:00
null
diffusers
{"license": "mit"}
nathanReitinger/MNIST-diffusion-oneImage
null
[ "diffusers", "safetensors", "license:mit", "has_space", "diffusers:DDPMPipeline", "region:us" ]
null
2024-04-27T02:43:43+00:00
null
null
{}
RodrigoLimaRFL/nurc-sp_pseudo_labelled
null
[ "region:us" ]
null
2024-04-27T02:43:43+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4967 - F1 Score: 0.7672 - Accuracy: 0.7661 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5906 | 0.97 | 200 | 0.5439 | 0.7379 | 0.7362 | | 0.5318 | 1.93 | 400 | 0.5112 | 0.7610 | 0.7595 | | 0.5162 | 2.9 | 600 | 0.5412 | 0.7384 | 0.7380 | | 0.51 | 3.86 | 800 | 0.4929 | 0.7745 | 0.7737 | | 0.503 | 4.83 | 1000 | 0.5124 | 0.7627 | 0.7613 | | 0.4981 | 5.8 | 1200 | 0.4982 | 0.7677 | 0.7661 | | 0.4922 | 6.76 | 1400 | 0.4873 | 0.7722 | 0.7707 | | 0.4884 | 7.73 | 1600 | 0.4937 | 0.7695 | 0.7679 | | 0.4866 | 8.7 | 1800 | 0.4929 | 0.7680 | 0.7664 | | 0.4818 | 9.66 | 2000 | 0.4900 | 0.7746 | 0.7731 | | 0.4833 | 10.63 | 2200 | 0.5128 | 0.7662 | 0.7649 | | 0.475 | 11.59 | 2400 | 0.4817 | 0.7797 | 0.7782 | | 0.4749 | 12.56 | 2600 | 0.4902 | 0.7734 | 0.7719 | | 0.4714 | 13.53 | 2800 | 0.4929 | 0.7719 | 0.7703 | | 0.4716 | 14.49 | 3000 | 0.4969 | 0.7681 | 0.7667 | | 0.4621 | 15.46 | 3200 | 0.4806 | 0.7789 | 0.7776 | | 0.468 | 16.43 | 3400 | 0.4978 | 0.7682 | 0.7667 | | 0.4639 | 17.39 | 3600 | 0.4814 | 0.7742 | 0.7728 | | 0.4581 | 18.36 | 3800 | 0.4766 | 0.7747 | 0.7740 | | 0.4579 | 19.32 | 4000 | 0.4889 | 0.7797 | 0.7782 | | 0.4541 | 20.29 | 4200 | 0.4879 | 0.7748 | 0.7734 | | 0.4542 | 21.26 | 4400 | 0.4880 | 0.7803 | 0.7788 | | 0.4502 | 22.22 | 4600 | 0.5049 | 0.7697 | 0.7682 | | 0.4505 | 23.19 | 4800 | 0.4831 | 0.7804 | 0.7791 | | 0.4495 | 24.15 | 5000 | 0.4909 | 0.7752 | 0.7737 | | 0.4462 | 25.12 | 5200 | 0.5006 | 0.7758 | 0.7743 | | 0.4447 | 26.09 | 5400 | 0.4841 | 0.7797 | 0.7782 | | 0.444 | 27.05 | 5600 | 0.5031 | 0.7679 | 0.7664 | | 0.4405 | 28.02 | 5800 | 0.4820 | 0.7794 | 0.7782 | | 0.4392 | 28.99 | 6000 | 0.4842 | 0.7752 | 0.7737 | | 0.4358 | 29.95 | 6200 | 0.4920 | 0.7743 | 0.7728 | | 0.4363 | 30.92 | 6400 | 0.4814 | 0.7765 | 0.7764 | | 0.4365 | 31.88 | 6600 | 0.4851 | 0.7767 | 0.7755 | | 0.4313 | 32.85 | 6800 | 0.4955 | 0.7722 | 0.7707 | | 0.4336 | 33.82 | 7000 | 0.4930 | 0.7719 | 0.7703 | | 0.4298 | 34.78 | 7200 | 0.5006 | 0.7707 | 0.7691 | | 0.4335 | 35.75 | 7400 | 0.4909 | 0.7772 | 0.7758 | | 0.425 | 36.71 | 7600 | 0.4977 | 0.7755 | 0.7740 | | 0.4289 | 37.68 | 7800 | 0.4993 | 0.7749 | 0.7734 | | 0.4263 | 38.65 | 8000 | 0.4875 | 0.7781 | 0.7767 | | 0.4337 | 39.61 | 8200 | 0.4826 | 0.7785 | 0.7773 | | 0.4211 | 40.58 | 8400 | 0.4989 | 0.7749 | 0.7734 | | 0.4228 | 41.55 | 8600 | 0.4944 | 0.7763 | 0.7749 | | 0.4203 | 42.51 | 8800 | 0.4904 | 0.7726 | 0.7713 | | 0.4256 | 43.48 
| 9000 | 0.5018 | 0.7719 | 0.7703 | | 0.4211 | 44.44 | 9200 | 0.4971 | 0.7716 | 0.7700 | | 0.4259 | 45.41 | 9400 | 0.4937 | 0.7755 | 0.7740 | | 0.4148 | 46.38 | 9600 | 0.4993 | 0.7737 | 0.7722 | | 0.4243 | 47.34 | 9800 | 0.4932 | 0.7748 | 0.7734 | | 0.4207 | 48.31 | 10000 | 0.4919 | 0.7751 | 0.7737 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T02:44:04+00:00
text2text-generation
transformers
{}
WilliamStar/my_awesome_opus_books_model
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T02:44:29+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4901 - F1 Score: 0.7690 - Accuracy: 0.7688 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5746 | 0.97 | 200 | 0.5157 | 0.7559 | 0.7543 | | 0.5174 | 1.93 | 400 | 0.4942 | 0.7703 | 0.7688 | | 0.5017 | 2.9 | 600 | 0.5152 | 0.7542 | 0.7531 | | 0.4951 | 3.86 | 800 | 0.4836 | 0.7753 | 0.7746 | | 0.4875 | 4.83 | 1000 | 0.5032 | 0.7625 | 0.7610 | | 0.4815 | 5.8 | 1200 | 0.4971 | 0.7676 | 0.7661 | | 0.4737 | 6.76 | 1400 | 0.4861 | 0.7728 | 0.7713 | | 0.4679 | 7.73 | 1600 | 0.4964 | 0.7659 | 0.7643 | | 0.463 | 8.7 | 1800 | 0.5063 | 0.7630 | 0.7616 | | 0.4547 | 9.66 | 2000 | 0.4940 | 0.7704 | 0.7688 | | 0.4548 | 10.63 | 2200 | 0.5152 | 0.7631 | 0.7616 | | 0.4445 | 11.59 | 2400 | 0.4812 | 0.7752 | 0.7740 | | 0.4386 | 12.56 | 2600 | 0.5055 | 0.7727 | 0.7713 | | 0.4306 | 13.53 | 2800 | 0.5064 | 0.7692 | 0.7676 | | 0.431 | 14.49 | 3000 | 0.4957 | 0.7712 | 0.7697 | | 0.415 | 15.46 | 3200 | 0.5090 | 0.7696 | 0.7685 | | 0.4186 | 16.43 | 3400 | 0.5115 | 0.7704 | 0.7688 | | 0.4127 | 17.39 | 3600 | 0.5006 | 0.7649 | 0.7637 | | 0.4009 | 18.36 | 3800 | 0.5020 | 0.7676 | 0.7673 | | 0.3987 | 19.32 | 4000 | 0.5140 | 0.7696 | 0.7688 | | 0.3925 | 20.29 | 4200 | 0.5288 | 0.7559 | 0.7555 | | 0.3916 | 21.26 | 4400 | 0.5305 | 0.7707 | 0.7691 | | 0.3798 | 22.22 | 4600 | 0.5455 | 0.7649 | 0.7634 | | 0.3791 | 23.19 | 4800 | 0.5304 | 0.7660 | 0.7652 | | 0.3712 | 24.15 | 5000 | 0.5595 | 0.7589 | 0.7573 | | 0.3705 | 25.12 | 5200 | 0.5477 | 0.7628 | 0.7613 | | 0.3614 | 26.09 | 5400 | 0.5390 | 0.7601 | 0.7589 | | 0.3579 | 27.05 | 5600 | 0.5539 | 0.7569 | 0.7552 | | 0.3532 | 28.02 | 5800 | 0.5317 | 0.7632 | 0.7622 | | 0.35 | 28.99 | 6000 | 0.5360 | 0.7572 | 0.7564 | | 0.3415 | 29.95 | 6200 | 0.5697 | 0.7554 | 0.7546 | | 0.3376 | 30.92 | 6400 | 0.5587 | 0.7627 | 0.7634 | | 0.3348 | 31.88 | 6600 | 0.5711 | 0.7533 | 0.7519 | | 0.3278 | 32.85 | 6800 | 0.5775 | 0.7595 | 0.7582 | | 0.3289 | 33.82 | 7000 | 0.5727 | 0.7613 | 0.7601 | | 0.3226 | 34.78 | 7200 | 0.5911 | 0.7624 | 0.7610 | | 0.3255 | 35.75 | 7400 | 0.5886 | 0.7499 | 0.7486 | | 0.3174 | 36.71 | 7600 | 0.5878 | 0.7625 | 0.7616 | | 0.3152 | 37.68 | 7800 | 0.6031 | 0.7584 | 0.7570 | | 0.3103 | 38.65 | 8000 | 0.5939 | 0.7590 | 0.7579 | | 0.3155 | 39.61 | 8200 | 0.5886 | 0.7575 | 0.7567 | | 0.3043 | 40.58 | 8400 | 0.6121 | 0.7554 | 0.7540 | | 0.302 | 41.55 | 8600 | 0.6296 | 0.7578 | 0.7564 | | 0.2983 | 42.51 | 8800 | 0.6113 | 0.7573 | 0.7564 | | 0.3019 | 
43.48 | 9000 | 0.6239 | 0.7536 | 0.7522 | | 0.2971 | 44.44 | 9200 | 0.6205 | 0.7577 | 0.7564 | | 0.2991 | 45.41 | 9400 | 0.6207 | 0.7572 | 0.7558 | | 0.2918 | 46.38 | 9600 | 0.6238 | 0.7578 | 0.7564 | | 0.2998 | 47.34 | 9800 | 0.6179 | 0.7564 | 0.7552 | | 0.2939 | 48.31 | 10000 | 0.6184 | 0.7557 | 0.7546 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T02:45:47+00:00
null
null
{}
csmallzzz8/ChillLogo
null
[ "region:us" ]
null
2024-04-27T02:45:58+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.5958 - F1 Score: 0.6738 - Accuracy: 0.6755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6591 | 1.04 | 200 | 0.6316 | 0.5954 | 0.6409 | | 0.6239 | 2.08 | 400 | 0.6231 | 0.6562 | 0.6556 | | 0.6154 | 3.12 | 600 | 0.6142 | 0.6453 | 0.6686 | | 0.6117 | 4.17 | 800 | 0.6111 | 0.6695 | 0.6722 | | 0.6093 | 5.21 | 1000 | 0.6122 | 0.6686 | 0.6689 | | 0.606 | 6.25 | 1200 | 0.6043 | 0.6673 | 0.6722 | | 0.5999 | 7.29 | 1400 | 0.6149 | 0.6710 | 0.6696 | | 0.6007 | 8.33 | 1600 | 0.6111 | 0.6744 | 0.6745 | | 0.6002 | 9.38 | 1800 | 0.6209 | 0.6636 | 0.6611 | | 0.5964 | 10.42 | 2000 | 0.6083 | 0.6740 | 0.6729 | | 0.5991 | 11.46 | 2200 | 0.5991 | 0.6736 | 0.6794 | | 0.5887 | 12.5 | 2400 | 0.6031 | 0.6823 | 0.6836 | | 0.5911 | 13.54 | 2600 | 0.6048 | 0.6779 | 0.6794 | | 0.5912 | 14.58 | 2800 | 0.6017 | 0.6779 | 0.6820 | | 0.5953 | 15.62 | 3000 | 0.6044 | 0.6795 | 0.6804 | | 0.5873 | 16.67 | 3200 | 0.6081 | 0.6732 | 0.6719 | | 0.5886 | 17.71 | 3400 | 0.6009 | 0.6798 | 0.6817 | | 0.5863 | 18.75 | 3600 | 0.5991 | 0.6805 | 0.6852 | | 0.5888 | 19.79 | 3800 | 0.6010 | 0.6764 | 0.6771 | | 0.5835 | 20.83 | 4000 | 0.6038 | 0.6748 | 0.6742 | | 0.5859 | 21.88 | 4200 | 0.6095 | 0.6776 | 0.6761 | | 0.5829 | 22.92 | 4400 | 0.6040 | 0.6800 | 0.6797 | | 0.5837 | 23.96 | 4600 | 0.6061 | 0.6761 | 0.6748 | | 0.5801 | 25.0 | 4800 | 0.6026 | 0.6804 | 0.6807 | | 0.58 | 26.04 | 5000 | 0.6049 | 0.6808 | 0.6804 | | 0.5785 | 27.08 | 5200 | 0.6068 | 0.6748 | 0.6735 | | 0.5794 | 28.12 | 5400 | 0.6059 | 0.6777 | 0.6768 | | 0.579 | 29.17 | 5600 | 0.6051 | 0.6790 | 0.6781 | | 0.5768 | 30.21 | 5800 | 0.6068 | 0.6793 | 0.6787 | | 0.5783 | 31.25 | 6000 | 0.6033 | 0.6803 | 0.6800 | | 0.5766 | 32.29 | 6200 | 0.5977 | 0.6847 | 0.6865 | | 0.5785 | 33.33 | 6400 | 0.6007 | 0.6833 | 0.6843 | | 0.5764 | 34.38 | 6600 | 0.5983 | 0.6807 | 0.6823 | | 0.5744 | 35.42 | 6800 | 0.6027 | 0.6804 | 0.6804 | | 0.5746 | 36.46 | 7000 | 0.5975 | 0.6859 | 0.6882 | | 0.5712 | 37.5 | 7200 | 0.6051 | 0.6773 | 0.6764 | | 0.5725 | 38.54 | 7400 | 0.6034 | 0.6820 | 0.6820 | | 0.5767 | 39.58 | 7600 | 0.5990 | 0.6821 | 0.6833 | | 0.5764 | 40.62 | 7800 | 0.6066 | 0.6774 | 0.6758 | | 0.5715 | 41.67 | 8000 | 0.6026 | 0.6822 | 0.6817 | | 0.5677 | 42.71 | 8200 | 0.5987 | 0.6823 | 0.6839 | | 0.5722 | 43.75 | 8400 | 0.6011 | 0.6791 | 0.6790 | | 0.5724 | 44.79 | 8600 | 0.6043 | 0.6802 | 0.6794 | | 0.5693 | 45.83 | 8800 | 0.6004 | 0.6787 | 0.6790 | | 0.5691 | 
46.88 | 9000 | 0.6061 | 0.6801 | 0.6790 | | 0.5712 | 47.92 | 9200 | 0.6039 | 0.6823 | 0.6817 | | 0.57 | 48.96 | 9400 | 0.6020 | 0.6817 | 0.6817 | | 0.5704 | 50.0 | 9600 | 0.6028 | 0.6806 | 0.6804 | | 0.5707 | 51.04 | 9800 | 0.6033 | 0.6824 | 0.6820 | | 0.5682 | 52.08 | 10000 | 0.6036 | 0.6815 | 0.6810 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T02:46:12+00:00
text2text-generation
transformers
### Using .generate() ```python from transformers import GenerationConfig, T5ForConditionalGeneration, T5Tokenizer model_name = "cu-kairos/propbank_srl_seq2seq_t5_small" model = T5ForConditionalGeneration.from_pretrained(model_name) tokenizer = T5Tokenizer.from_pretrained(model_name) generation_config = GenerationConfig.from_pretrained(model_name) tokenized_inputs = tokenizer(["SRL for [put]: That fund was [put] together by Blackstone Group ."], return_tensors="pt") outputs = model.generate(**tokenized_inputs, generation_config=generation_config) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # ['ARG-1: That fund | ARG-2: together | ARG-0: by Blackstone Group '] ``` ### Using pipeline ```python from transformers import pipeline srl = pipeline("text2text-generation", model="cu-kairos/propbank_srl_seq2seq_t5_small") print(srl(["SRL for [put]: That fund was [put] together by Blackstone Group ."])) # [{'generated_text': 'ARG-1: That fund | ARG-2: together | ARG-0: by Blackstone Group '}] ```
{"license": "apache-2.0"}
cu-kairos/propbank_srl_seq2seq_t5_small
null
[ "transformers", "safetensors", "t5", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T02:46:48+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
elliotthwangmsa/KimLanpure-phi-3-zh
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T02:47:26+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zandfj/LLaMA2-7B-Chat-dpo-zf-042701-moren
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T02:47:45+00:00
null
null
{}
sleepyraygun/CrispyC2
null
[ "region:us" ]
null
2024-04-27T02:48:09+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2156 - Accuracy: 0.923 - F1: 0.9226 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3117 | 0.9065 | 0.9055 | | No log | 2.0 | 500 | 0.2156 | 0.923 | 0.9226 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.2.2 - Datasets 2.12.0 - Tokenizers 0.13.2
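As an approximate sketch, the hyperparameters above correspond to a `TrainingArguments` setup along these lines (the output directory and the per-epoch evaluation strategy are assumptions, not taken verbatim from the card):

```python
from transformers import TrainingArguments

# Mirrors the listed hyperparameters: 2e-5 learning rate, batch size 64, linear schedule, 2 epochs.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch validation results above
)
```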
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.923, "name": "Accuracy"}, {"type": "f1", "value": 0.9225647553629688, "name": "F1"}]}]}]}
VuaCoBac/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T02:48:38+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10 This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-410m", "model-index": [{"name": "robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10", "results": []}]}
AlignmentResearch/robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T02:50:34+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1 This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/zephyr-7b-beta", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1", "results": []}]}
ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/zephyr-7b-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T02:50:50+00:00
null
null
{}
hoa-quickloop/tryon-controlnet-1.4
null
[ "region:us" ]
null
2024-04-27T02:51:32+00:00
null
null
# Mobius RWKV r5 chat 12B 8k Mobius is an RWKV v5.2-arch chat model that benefits from [Matrix-Valued States and Dynamic Recurrence](https://arxiv.org/abs/2404.05892). ## Introduction Mobius is an RWKV v5.2-arch model, a state-based RNN+CNN+Transformer mixed language model pretrained on a certain amount of data. In comparison with the previously released Mobius, the improvements include: * only 24 GB of VRAM needed to run this model locally in fp16; * significant performance improvement; * multilingual support; * stable support for a 128K context length. * Base model: [Mobius-mega-12B-128k-base](https://huggingface.co/TimeMobius/Moibus-mega-12B-128k-base) ## Usage We encourage you to use few-shot prompts with this model; directly using the User: xxxx\n\nAssistant: xxx\n\n format also works well and can bring out its full ability. Recommended temperature/top_p pairs: 0.7/0.6, 1/0.3, 1.5/0.3, 0.2/0.8. ## More details Mobius 12B 128k is based on the RWKV v5.2 arch, a leading state-based RNN+CNN+Transformer mixed large language model focused on the open-source community: * 10~100x training/inference cost reduction; * state-based, selective memory, which makes it good at grokking; * community support. ## Requirements 24 GB of VRAM to run in fp16, 12 GB for int8, 6 GB for nf4 with the Ai00 server. * [RWKV Runner](https://github.com/josStorer/RWKV-Runner) * [Ai00 server](https://github.com/cgisky1980/ai00_rwkv_server) ## Future plan If you need an HF version, let us know: [Mobius-Chat-12B-128k](https://huggingface.co/TimeMobius/Mobius-Chat-12B-128k)
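A minimal sketch of the prompt format described above (the few-shot content, the question, and the sampling values are illustrative; serving the prompt is left to your runtime, e.g. RWKV Runner or the Ai00 server):

```python
# Few-shot prompt following the "User: ...\n\nAssistant: ...\n\n" template from this card.
few_shot = (
    "User: What is the capital of France?\n\n"
    "Assistant: The capital of France is Paris.\n\n"
)
question = "User: Explain in one sentence what a state-based RNN is.\n\nAssistant:"
prompt = few_shot + question

# One of the temperature/top_p pairs recommended above.
sampling = {"temperature": 0.7, "top_p": 0.6}
print(prompt, sampling)
```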
{"license": "apache-2.0"}
TimeMobius/Mobius-RWKV-r5-chat-12B-8k
null
[ "arxiv:2404.05892", "license:apache-2.0", "region:us" ]
null
2024-04-27T02:54:25+00:00
text-generation
transformers
# Description 4-bit AWQ-quantized version of [stylellm/ShuiHuZhuan-6b](https://huggingface.co/stylellm/ShuiHuZhuan-6b)
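A minimal loading sketch (assumptions: the autoawq package is installed, a CUDA device is available for `device_map="auto"`, and the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stylellm/ShuiHuZhuan-6b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ checkpoints load through transformers when the autoawq package is installed (environment assumption).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "且说林冲"  # illustrative prompt in the style of the source novel
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```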
{"license": "other", "license_name": "yi-license", "license_link": "https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE"}
stylellm/ShuiHuZhuan-6b-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T02:55:51+00:00
null
null
{}
WALIDALI/bekiiippony__bekiksritly
null
[ "region:us" ]
null
2024-04-27T02:55:54+00:00
text-generation
transformers
# Description 4-bit AWQ-quantized version of [stylellm/XiYouJi-6b](https://huggingface.co/stylellm/XiYouJi-6b)
{"license": "other", "license_name": "yi-license", "license_link": "https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE"}
stylellm/XiYouJi-6b-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T02:57:06+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speech_ocean_wav2vec_mdd This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3663 - Wer: 0.0863 - Cer: 0.0692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-------:|:----:|:---------------:|:------:|:------:| | 45.149 | 0.9873 | 39 | 45.0584 | 1.0258 | 0.7932 | | 40.7325 | 2.0 | 79 | 32.0660 | 1.0 | 1.0 | | 14.8164 | 2.9873 | 118 | 8.1694 | 1.0 | 1.0 | | 5.6535 | 4.0 | 158 | 4.5922 | 1.0 | 1.0 | | 3.9508 | 4.9873 | 197 | 3.8581 | 1.0 | 1.0 | | 3.8065 | 6.0 | 237 | 3.7907 | 1.0 | 1.0 | | 3.766 | 6.9873 | 276 | 3.7769 | 1.0 | 1.0 | | 3.7552 | 8.0 | 316 | 3.7465 | 1.0 | 1.0 | | 3.7489 | 8.9873 | 355 | 3.7611 | 1.0 | 1.0 | | 3.7263 | 10.0 | 395 | 3.7234 | 1.0 | 1.0 | | 3.7343 | 10.9873 | 434 | 3.6934 | 1.0 | 1.0 | | 3.6327 | 12.0 | 474 | 3.4204 | 1.0 | 1.0 | | 3.1861 | 12.9873 | 513 | 2.7907 | 0.9710 | 0.9864 | | 2.2814 | 14.0 | 553 | 1.7142 | 0.5088 | 0.5401 | | 1.6854 | 14.9873 | 592 | 1.0573 | 0.2488 | 0.1914 | | 1.2968 | 16.0 | 632 | 0.7282 | 0.1786 | 0.1391 | | 0.8626 | 16.9873 | 671 | 0.5435 | 0.1305 | 0.0999 | | 0.7852 | 18.0 | 711 | 0.4440 | 0.1046 | 0.0831 | | 0.6332 | 18.9873 | 750 | 0.3847 | 0.0936 | 0.0748 | | 0.6518 | 19.7468 | 780 | 0.3663 | 0.0863 | 0.0692 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
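A minimal inference sketch (assumption: the fine-tuned checkpoint exposes a CTC head that works through the ASR pipeline; the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="nrshoudi/speech_ocean_wav2vec_mdd")
print(asr("sample.wav"))  # "sample.wav" is a placeholder audio file
```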
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-large-xlsr-53", "model-index": [{"name": "speech_ocean_wav2vec_mdd", "results": []}]}
nrshoudi/speech_ocean_wav2vec_mdd
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:03:12+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.5987 - F1 Score: 0.6689 - Accuracy: 0.6712 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6507 | 1.04 | 200 | 0.6235 | 0.6298 | 0.6572 | | 0.6163 | 2.08 | 400 | 0.6337 | 0.6501 | 0.6474 | | 0.6088 | 3.12 | 600 | 0.6120 | 0.6605 | 0.6696 | | 0.6058 | 4.17 | 800 | 0.6387 | 0.6452 | 0.6426 | | 0.6026 | 5.21 | 1000 | 0.6129 | 0.6650 | 0.6699 | | 0.5971 | 6.25 | 1200 | 0.6107 | 0.6691 | 0.6758 | | 0.5906 | 7.29 | 1400 | 0.6096 | 0.6723 | 0.6732 | | 0.5903 | 8.33 | 1600 | 0.6159 | 0.6679 | 0.6683 | | 0.5856 | 9.38 | 1800 | 0.6257 | 0.6653 | 0.6628 | | 0.5813 | 10.42 | 2000 | 0.6059 | 0.6739 | 0.6807 | | 0.5826 | 11.46 | 2200 | 0.6015 | 0.6749 | 0.6804 | | 0.5693 | 12.5 | 2400 | 0.6119 | 0.6757 | 0.6768 | | 0.5717 | 13.54 | 2600 | 0.6076 | 0.6825 | 0.6849 | | 0.5682 | 14.58 | 2800 | 0.6147 | 0.6771 | 0.6810 | | 0.5733 | 15.62 | 3000 | 0.6180 | 0.6786 | 0.6797 | | 0.5631 | 16.67 | 3200 | 0.6091 | 0.6741 | 0.6777 | | 0.5629 | 17.71 | 3400 | 0.6161 | 0.6737 | 0.6738 | | 0.5585 | 18.75 | 3600 | 0.6159 | 0.6766 | 0.6781 | | 0.5583 | 19.79 | 3800 | 0.6155 | 0.6754 | 0.6761 | | 0.5534 | 20.83 | 4000 | 0.6086 | 0.6744 | 0.6777 | | 0.5526 | 21.88 | 4200 | 0.6331 | 0.6719 | 0.6699 | | 0.5494 | 22.92 | 4400 | 0.6340 | 0.6584 | 0.6562 | | 0.548 | 23.96 | 4600 | 0.6266 | 0.6708 | 0.6689 | | 0.5434 | 25.0 | 4800 | 0.6296 | 0.6724 | 0.6719 | | 0.5406 | 26.04 | 5000 | 0.6316 | 0.6725 | 0.6719 | | 0.5386 | 27.08 | 5200 | 0.6341 | 0.6677 | 0.6654 | | 0.5379 | 28.12 | 5400 | 0.6361 | 0.6615 | 0.6592 | | 0.5376 | 29.17 | 5600 | 0.6392 | 0.6692 | 0.6673 | | 0.5324 | 30.21 | 5800 | 0.6367 | 0.6721 | 0.6719 | | 0.5318 | 31.25 | 6000 | 0.6522 | 0.6627 | 0.6601 | | 0.5309 | 32.29 | 6200 | 0.6281 | 0.6727 | 0.6735 | | 0.5312 | 33.33 | 6400 | 0.6496 | 0.6649 | 0.6628 | | 0.5269 | 34.38 | 6600 | 0.6352 | 0.6730 | 0.6732 | | 0.5276 | 35.42 | 6800 | 0.6384 | 0.6666 | 0.6654 | | 0.5215 | 36.46 | 7000 | 0.6376 | 0.6667 | 0.6657 | | 0.5187 | 37.5 | 7200 | 0.6477 | 0.6651 | 0.6634 | | 0.5203 | 38.54 | 7400 | 0.6438 | 0.6674 | 0.6660 | | 0.5204 | 39.58 | 7600 | 0.6374 | 0.6764 | 0.6774 | | 0.5214 | 40.62 | 7800 | 0.6509 | 0.6601 | 0.6579 | | 0.5147 | 41.67 | 8000 | 0.6436 | 0.6632 | 0.6618 | | 0.5101 | 42.71 | 8200 | 0.6480 | 0.6678 | 0.6667 | | 0.5118 | 43.75 | 8400 | 0.6471 | 0.6627 | 0.6608 | | 0.5142 | 44.79 | 8600 | 0.6467 | 0.6651 | 0.6637 | | 0.5101 | 45.83 | 8800 | 0.6443 | 0.6689 | 0.6680 | | 0.5095 
| 46.88 | 9000 | 0.6576 | 0.6597 | 0.6572 | | 0.5116 | 47.92 | 9200 | 0.6527 | 0.6672 | 0.6650 | | 0.5075 | 48.96 | 9400 | 0.6515 | 0.6657 | 0.6641 | | 0.5094 | 50.0 | 9600 | 0.6544 | 0.6641 | 0.6621 | | 0.5094 | 51.04 | 9800 | 0.6532 | 0.6641 | 0.6621 | | 0.5084 | 52.08 | 10000 | 0.6549 | 0.6658 | 0.6637 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:04:03+00:00
text-classification
transformers
{}
WilliamStar/sequence_classification_model
null
[ "transformers", "pytorch", "tensorboard", "squeezebert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:04:53+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # blue_model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3527 - F1: 0.9217 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3136 | 1.0 | 1250 | 0.5730 | 0.8487 | | 0.1427 | 2.0 | 2500 | 0.4297 | 0.8980 | | 0.032 | 3.0 | 3750 | 0.3527 | 0.9217 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
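No usage example is included, so here is a minimal inference sketch; it assumes the classification head and label names were saved with this checkpoint.

```python
# Minimal sketch: run the fine-tuned classifier through the pipeline API.
# The input sentence is a placeholder.
from transformers import pipeline

classifier = pipeline("text-classification", model="TazCaldwell/blue_model")
print(classifier("This is an example sentence to classify."))
```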
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "bert-base-cased", "model-index": [{"name": "blue_model", "results": []}]}
TazCaldwell/blue_model
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:05:27+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-rw-1b-code-gen-llm-task2 This model is a fine-tuned version of [petals-team/falcon-rw-1b](https://huggingface.co/petals-team/falcon-rw-1b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 320 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6083 | 0.2 | 40 | 1.5153 | | 1.4854 | 0.4 | 80 | 1.3644 | | 1.3717 | 0.6 | 120 | 1.2477 | | 1.244 | 0.8 | 160 | 1.2093 | | 1.2581 | 1.0 | 200 | 1.1897 | | 1.1757 | 1.2 | 240 | 1.1816 | | 1.2085 | 1.4 | 280 | 1.1787 | | 1.1808 | 1.6 | 320 | 1.1783 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
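Because this repository holds only the SFT LoRA adapter, it has to be attached to the `petals-team/falcon-rw-1b` base model at inference time. The following is a hedged sketch; the prompt wording and the greedy decoding settings are assumptions, not documented by the card.

```python
# Hedged sketch: load the code-generation LoRA adapter on top of the base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "petals-team/falcon-rw-1b"
adapter_id = "Katochh/falcon-rw-1b-code-gen-llm-task2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```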
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "petals-team/falcon-rw-1b", "model-index": [{"name": "falcon-rw-1b-code-gen-llm-task2", "results": []}]}
Katochh/falcon-rw-1b-code-gen-llm-task2
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:petals-team/falcon-rw-1b", "license:apache-2.0", "region:us" ]
null
2024-04-27T03:06:16+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.6041 - F1 Score: 0.6807 - Accuracy: 0.6804 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6457 | 1.04 | 200 | 0.6181 | 0.6382 | 0.6680 | | 0.6127 | 2.08 | 400 | 0.6378 | 0.6446 | 0.6422 | | 0.6013 | 3.12 | 600 | 0.6017 | 0.6766 | 0.6781 | | 0.5961 | 4.17 | 800 | 0.6127 | 0.6739 | 0.6722 | | 0.588 | 5.21 | 1000 | 0.6058 | 0.6801 | 0.6810 | | 0.5822 | 6.25 | 1200 | 0.6042 | 0.6709 | 0.6693 | | 0.5717 | 7.29 | 1400 | 0.5956 | 0.6866 | 0.6898 | | 0.5663 | 8.33 | 1600 | 0.6086 | 0.6866 | 0.6862 | | 0.5612 | 9.38 | 1800 | 0.6295 | 0.6614 | 0.6592 | | 0.5506 | 10.42 | 2000 | 0.6046 | 0.6740 | 0.6764 | | 0.5482 | 11.46 | 2200 | 0.6004 | 0.6845 | 0.6872 | | 0.5316 | 12.5 | 2400 | 0.6010 | 0.6865 | 0.6869 | | 0.5274 | 13.54 | 2600 | 0.6310 | 0.6798 | 0.6777 | | 0.5205 | 14.58 | 2800 | 0.6221 | 0.6798 | 0.6797 | | 0.518 | 15.62 | 3000 | 0.6521 | 0.6711 | 0.6686 | | 0.5022 | 16.67 | 3200 | 0.6426 | 0.6751 | 0.6729 | | 0.4934 | 17.71 | 3400 | 0.6603 | 0.6669 | 0.6644 | | 0.4846 | 18.75 | 3600 | 0.6574 | 0.6803 | 0.6790 | | 0.4814 | 19.79 | 3800 | 0.6547 | 0.6806 | 0.6784 | | 0.4681 | 20.83 | 4000 | 0.6634 | 0.6783 | 0.6761 | | 0.4654 | 21.88 | 4200 | 0.6988 | 0.6739 | 0.6716 | | 0.4593 | 22.92 | 4400 | 0.7006 | 0.6723 | 0.6699 | | 0.4447 | 23.96 | 4600 | 0.6885 | 0.6701 | 0.6676 | | 0.442 | 25.0 | 4800 | 0.7219 | 0.6584 | 0.6562 | | 0.4321 | 26.04 | 5000 | 0.7074 | 0.6746 | 0.6725 | | 0.4253 | 27.08 | 5200 | 0.7410 | 0.6664 | 0.6644 | | 0.421 | 28.12 | 5400 | 0.7354 | 0.6665 | 0.6641 | | 0.413 | 29.17 | 5600 | 0.7220 | 0.6772 | 0.6755 | | 0.403 | 30.21 | 5800 | 0.7803 | 0.6734 | 0.6709 | | 0.4008 | 31.25 | 6000 | 0.7683 | 0.6816 | 0.6794 | | 0.3923 | 32.29 | 6200 | 0.7666 | 0.6714 | 0.6689 | | 0.3928 | 33.33 | 6400 | 0.7627 | 0.6825 | 0.6804 | | 0.3826 | 34.38 | 6600 | 0.7727 | 0.6816 | 0.6804 | | 0.3825 | 35.42 | 6800 | 0.7577 | 0.6845 | 0.6823 | | 0.3737 | 36.46 | 7000 | 0.7840 | 0.6772 | 0.6748 | | 0.3737 | 37.5 | 7200 | 0.7641 | 0.6802 | 0.6781 | | 0.3696 | 38.54 | 7400 | 0.7842 | 0.6822 | 0.6800 | | 0.3644 | 39.58 | 7600 | 0.7746 | 0.6836 | 0.6820 | | 0.3611 | 40.62 | 7800 | 0.8042 | 0.6772 | 0.6748 | | 0.3527 | 41.67 | 8000 | 0.8161 | 0.6755 | 0.6732 | | 0.3457 | 42.71 | 8200 | 0.8149 | 0.6791 | 0.6771 | | 0.3512 | 43.75 | 8400 | 0.8125 | 0.6756 | 0.6732 | | 0.3513 | 44.79 | 8600 | 0.8198 | 0.6714 | 0.6689 | | 0.3399 | 45.83 | 8800 | 0.8281 | 0.6813 | 0.6790 | | 0.3407 | 
46.88 | 9000 | 0.8229 | 0.6788 | 0.6764 | | 0.3407 | 47.92 | 9200 | 0.8400 | 0.6769 | 0.6745 | | 0.3342 | 48.96 | 9400 | 0.8383 | 0.6797 | 0.6774 | | 0.3355 | 50.0 | 9600 | 0.8366 | 0.6778 | 0.6755 | | 0.3338 | 51.04 | 9800 | 0.8430 | 0.6817 | 0.6794 | | 0.3327 | 52.08 | 10000 | 0.8487 | 0.6814 | 0.6790 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:09:17+00:00
object-detection
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8 This model is a fine-tuned version of [nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7](https://huggingface.co/nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 300 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.0.1 - Datasets 2.18.0 - Tokenizers 0.19.1
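For trying the checkpoint out, a short object-detection pipeline sketch such as the one below should be enough; the input image path, the 0.5 score threshold, and the `timm` backbone dependency are assumptions on my part.

```python
# Minimal sketch: run the fine-tuned DETR model on a document page image.
# May require `pip install timm` for the ResNet-50 backbone.
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8",
)
for det in detector("document_page.png", threshold=0.5):
    print(det["label"], round(det["score"], 3), det["box"])
```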
{"tags": ["generated_from_trainer"], "base_model": "nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7", "model-index": [{"name": "detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8", "results": []}]}
nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8
null
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:10:59+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K9ac-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4857 - F1 Score: 0.7690 - Accuracy: 0.7686 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6105 | 1.15 | 200 | 0.5704 | 0.7121 | 0.7118 | | 0.5517 | 2.3 | 400 | 0.6157 | 0.6694 | 0.6779 | | 0.5265 | 3.45 | 600 | 0.5768 | 0.7038 | 0.7074 | | 0.5197 | 4.6 | 800 | 0.5693 | 0.7145 | 0.7172 | | 0.5123 | 5.75 | 1000 | 0.5369 | 0.7354 | 0.7352 | | 0.5059 | 6.9 | 1200 | 0.5433 | 0.7396 | 0.7395 | | 0.5013 | 8.05 | 1400 | 0.5393 | 0.7381 | 0.7380 | | 0.5019 | 9.2 | 1600 | 0.5736 | 0.7145 | 0.7179 | | 0.4956 | 10.34 | 1800 | 0.5302 | 0.7427 | 0.7424 | | 0.4964 | 11.49 | 2000 | 0.5296 | 0.7425 | 0.7424 | | 0.4879 | 12.64 | 2200 | 0.5755 | 0.7235 | 0.7265 | | 0.4909 | 13.79 | 2400 | 0.5323 | 0.7410 | 0.7413 | | 0.4862 | 14.94 | 2600 | 0.5214 | 0.7450 | 0.7449 | | 0.4847 | 16.09 | 2800 | 0.5236 | 0.7532 | 0.7531 | | 0.4831 | 17.24 | 3000 | 0.5322 | 0.7455 | 0.7456 | | 0.4791 | 18.39 | 3200 | 0.5421 | 0.7383 | 0.7391 | | 0.4831 | 19.54 | 3400 | 0.5213 | 0.7479 | 0.7481 | | 0.4759 | 20.69 | 3600 | 0.5204 | 0.7502 | 0.7499 | | 0.4773 | 21.84 | 3800 | 0.5315 | 0.7355 | 0.7370 | | 0.4715 | 22.99 | 4000 | 0.5248 | 0.7465 | 0.7470 | | 0.4762 | 24.14 | 4200 | 0.5046 | 0.7544 | 0.7539 | | 0.4647 | 25.29 | 4400 | 0.5273 | 0.7485 | 0.7485 | | 0.4735 | 26.44 | 4600 | 0.5185 | 0.7506 | 0.7506 | | 0.4682 | 27.59 | 4800 | 0.5320 | 0.7436 | 0.7445 | | 0.4669 | 28.74 | 5000 | 0.5183 | 0.7506 | 0.7510 | | 0.4703 | 29.89 | 5200 | 0.5236 | 0.7516 | 0.7517 | | 0.4657 | 31.03 | 5400 | 0.5227 | 0.7485 | 0.7488 | | 0.4666 | 32.18 | 5600 | 0.5091 | 0.7567 | 0.7564 | | 0.4586 | 33.33 | 5800 | 0.5142 | 0.7546 | 0.7542 | | 0.4677 | 34.48 | 6000 | 0.5176 | 0.7511 | 0.7513 | | 0.4587 | 35.63 | 6200 | 0.5129 | 0.7534 | 0.7531 | | 0.4624 | 36.78 | 6400 | 0.5180 | 0.7514 | 0.7517 | | 0.4599 | 37.93 | 6600 | 0.5267 | 0.7485 | 0.7488 | | 0.461 | 39.08 | 6800 | 0.5112 | 0.7532 | 0.7531 | | 0.4586 | 40.23 | 7000 | 0.5133 | 0.7532 | 0.7531 | | 0.4601 | 41.38 | 7200 | 0.5209 | 0.7500 | 0.7503 | | 0.4588 | 42.53 | 7400 | 0.5120 | 0.7525 | 0.7524 | | 0.4574 | 43.68 | 7600 | 0.5223 | 0.7465 | 0.7470 | | 0.4576 | 44.83 | 7800 | 0.5229 | 0.7479 | 0.7485 | | 0.4575 | 45.98 | 8000 | 0.5164 | 0.7502 | 0.7503 | | 0.4572 | 47.13 | 8200 | 0.5219 | 0.7480 | 0.7485 | | 0.4537 | 48.28 | 8400 | 0.5148 | 0.7521 | 0.7521 | | 0.4542 | 49.43 | 8600 | 0.5129 | 0.7540 | 0.7539 | | 0.4548 | 50.57 | 8800 | 0.5191 | 0.7505 | 0.7506 | | 0.4561 | 
51.72 | 9000 | 0.5211 | 0.7488 | 0.7492 | | 0.4512 | 52.87 | 9200 | 0.5229 | 0.7495 | 0.7499 | | 0.457 | 54.02 | 9400 | 0.5188 | 0.7489 | 0.7492 | | 0.4543 | 55.17 | 9600 | 0.5228 | 0.7490 | 0.7496 | | 0.4515 | 56.32 | 9800 | 0.5153 | 0.7531 | 0.7531 | | 0.4579 | 57.47 | 10000 | 0.5171 | 0.7512 | 0.7513 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:11:11+00:00
text-generation
transformers
{}
UsamaCoder/finetunedLlama-python-B
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T03:11:50+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zandfj/LLaMA2-7B-Chat-dpo-zf-042710-moren-maybecf
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:17:06+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K9ac-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4725 - F1 Score: 0.7882 - Accuracy: 0.7877 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5872 | 1.15 | 200 | 0.5828 | 0.7099 | 0.7118 | | 0.5275 | 2.3 | 400 | 0.5996 | 0.6879 | 0.6959 | | 0.5012 | 3.45 | 600 | 0.5458 | 0.7278 | 0.7294 | | 0.494 | 4.6 | 800 | 0.5236 | 0.7389 | 0.7395 | | 0.4861 | 5.75 | 1000 | 0.5131 | 0.7522 | 0.7521 | | 0.4786 | 6.9 | 1200 | 0.5109 | 0.7520 | 0.7517 | | 0.4735 | 8.05 | 1400 | 0.5289 | 0.7457 | 0.7460 | | 0.4699 | 9.2 | 1600 | 0.5366 | 0.7307 | 0.7334 | | 0.4656 | 10.34 | 1800 | 0.5022 | 0.7571 | 0.7567 | | 0.4624 | 11.49 | 2000 | 0.5082 | 0.7500 | 0.7499 | | 0.4539 | 12.64 | 2200 | 0.5246 | 0.7475 | 0.7481 | | 0.4532 | 13.79 | 2400 | 0.5058 | 0.7616 | 0.7614 | | 0.4484 | 14.94 | 2600 | 0.4923 | 0.7615 | 0.7611 | | 0.4464 | 16.09 | 2800 | 0.5202 | 0.7580 | 0.7585 | | 0.4427 | 17.24 | 3000 | 0.5187 | 0.7616 | 0.7618 | | 0.441 | 18.39 | 3200 | 0.5107 | 0.7643 | 0.7643 | | 0.4411 | 19.54 | 3400 | 0.4989 | 0.7623 | 0.7621 | | 0.4317 | 20.69 | 3600 | 0.5000 | 0.7755 | 0.7751 | | 0.432 | 21.84 | 3800 | 0.5128 | 0.7620 | 0.7621 | | 0.4255 | 22.99 | 4000 | 0.5228 | 0.7568 | 0.7575 | | 0.4291 | 24.14 | 4200 | 0.4951 | 0.7673 | 0.7668 | | 0.416 | 25.29 | 4400 | 0.5074 | 0.7654 | 0.7650 | | 0.4224 | 26.44 | 4600 | 0.5063 | 0.7691 | 0.7686 | | 0.4215 | 27.59 | 4800 | 0.5098 | 0.7656 | 0.7654 | | 0.4145 | 28.74 | 5000 | 0.5032 | 0.7645 | 0.7643 | | 0.4178 | 29.89 | 5200 | 0.5065 | 0.7691 | 0.7686 | | 0.412 | 31.03 | 5400 | 0.5218 | 0.7599 | 0.7600 | | 0.41 | 32.18 | 5600 | 0.5066 | 0.7698 | 0.7693 | | 0.4034 | 33.33 | 5800 | 0.5072 | 0.7709 | 0.7704 | | 0.4083 | 34.48 | 6000 | 0.5014 | 0.7673 | 0.7668 | | 0.4009 | 35.63 | 6200 | 0.5110 | 0.7666 | 0.7661 | | 0.4009 | 36.78 | 6400 | 0.5065 | 0.7626 | 0.7621 | | 0.4013 | 37.93 | 6600 | 0.5248 | 0.7629 | 0.7625 | | 0.3998 | 39.08 | 6800 | 0.5121 | 0.7615 | 0.7611 | | 0.397 | 40.23 | 7000 | 0.5241 | 0.7625 | 0.7621 | | 0.3973 | 41.38 | 7200 | 0.5170 | 0.7608 | 0.7603 | | 0.3942 | 42.53 | 7400 | 0.5102 | 0.7658 | 0.7654 | | 0.3913 | 43.68 | 7600 | 0.5165 | 0.7644 | 0.7639 | | 0.3918 | 44.83 | 7800 | 0.5233 | 0.7621 | 0.7618 | | 0.3916 | 45.98 | 8000 | 0.5160 | 0.7684 | 0.7679 | | 0.3883 | 47.13 | 8200 | 0.5268 | 0.7643 | 0.7639 | | 0.3857 | 48.28 | 8400 | 0.5265 | 0.7633 | 0.7629 | | 0.3841 | 49.43 | 8600 | 0.5217 | 0.7626 | 0.7621 | | 0.3858 | 50.57 | 8800 | 0.5269 | 0.7579 | 0.7575 | | 0.3862 | 51.72 | 
9000 | 0.5219 | 0.7651 | 0.7647 | | 0.3793 | 52.87 | 9200 | 0.5349 | 0.7618 | 0.7614 | | 0.3875 | 54.02 | 9400 | 0.5238 | 0.7651 | 0.7647 | | 0.3832 | 55.17 | 9600 | 0.5286 | 0.7626 | 0.7621 | | 0.3805 | 56.32 | 9800 | 0.5197 | 0.7662 | 0.7657 | | 0.3856 | 57.47 | 10000 | 0.5209 | 0.7659 | 0.7654 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:17:28+00:00
text-generation
transformers
Quantizations of https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha # From original readme ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> **Important: Please use the exact chat template provided below for the model. Otherwise there will be a degrade in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.** Our model follows the exact chat template and usage as [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5). Please refer to their model card for more details. In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free test. The conversation template is the same as Openchat 3.5: ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` ## Code Examples ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha") model = transformers.AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha") def generate_response(prompt): input_ids = tokenizer(prompt, return_tensors="pt").input_ids outputs = model.generate( input_ids, max_length=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, ) response_ids = outputs[0] response_text = tokenizer.decode(response_ids, skip_special_tokens=True) return response_text # Single-turn conversation prompt = "Hello, how are you?" single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:" response_text = generate_response(single_turn_prompt) print("Response:", response_text) ## Multi-turn conversation prompt = "Hello" follow_up_question = "How are you today?" response = "" multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:" response_text = generate_response(multi_turn_prompt) print("Multi-turn conversation response:", response_text) ### Coding conversation prompt = "Implement quicksort using C++" coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:" response = generate_response(coding_prompt) print("Coding conversation response:", response) ```
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "Starling-LM-7B-alpha"], "pipeline_tag": "text-generation", "inference": false}
duyntnet/Starling-LM-7B-alpha-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "Starling-LM-7B-alpha", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-27T03:18:46+00:00
text-to-image
diffusers
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "DC comics style"}
ZachX/comics_SDXL_lora
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
null
2024-04-27T03:19:39+00:00
null
null
{}
SubHumanZZ/Anime-Enhancer-XL
null
[ "region:us" ]
null
2024-04-27T03:19:52+00:00
text-generation
transformers
# Uploaded model - **Developed by:** liminerity - **License:** apache-2.0 - **Finetuned from model:** Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
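No inference snippet is provided; the sketch below assumes this repository contains merged Llama-3 weights with a chat template in the tokenizer config, which the card does not confirm.

```python
# Hedged sketch: plain transformers chat-style generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liminerity/llama-3-8b-silent-star"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Tell me a short story about a silent star."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```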
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1"}
liminerity/llama-3-8b-silent-star
null
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:19:55+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-lora-64-no-quant-2k This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
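As with other adapter-only releases, the DPO-trained LoRA weights need to be attached to the SFT base model before use; merging is optional. This is a sketch under the assumption that the adapter is directly loadable with PEFT 0.7+:

```python
# Hedged sketch: attach the DPO LoRA adapter to the SFT base, then optionally
# merge it into a standalone checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "alignment-handbook/zephyr-7b-sft-full", device_map="auto"
)
model = PeftModel.from_pretrained(base, "YYYYYYibo/zephyr-7b-lora-64-no-quant-2k")
merged = model.merge_and_unload()  # folds the LoRA weights into the base model
merged.save_pretrained("zephyr-7b-dpo-lora-merged")
```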
{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["updated", "original"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "zephyr-7b-lora-64-no-quant-2k", "results": []}]}
YYYYYYibo/zephyr-7b-lora-64-no-quant-2k
null
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:updated", "dataset:original", "base_model:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
null
2024-04-27T03:20:21+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K9ac-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4638 - F1 Score: 0.7886 - Accuracy: 0.7881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5686 | 1.15 | 200 | 0.5615 | 0.7215 | 0.7233 | | 0.5079 | 2.3 | 400 | 0.5688 | 0.7010 | 0.7071 | | 0.4874 | 3.45 | 600 | 0.5293 | 0.7366 | 0.7377 | | 0.4783 | 4.6 | 800 | 0.5067 | 0.7505 | 0.7506 | | 0.471 | 5.75 | 1000 | 0.5038 | 0.7570 | 0.7567 | | 0.4593 | 6.9 | 1200 | 0.5000 | 0.7680 | 0.7675 | | 0.451 | 8.05 | 1400 | 0.5091 | 0.7601 | 0.7596 | | 0.4445 | 9.2 | 1600 | 0.5151 | 0.7528 | 0.7531 | | 0.4361 | 10.34 | 1800 | 0.5131 | 0.7579 | 0.7575 | | 0.4336 | 11.49 | 2000 | 0.5120 | 0.7658 | 0.7654 | | 0.4209 | 12.64 | 2200 | 0.5051 | 0.7592 | 0.7589 | | 0.4155 | 13.79 | 2400 | 0.5164 | 0.7554 | 0.7553 | | 0.4101 | 14.94 | 2600 | 0.4929 | 0.7690 | 0.7686 | | 0.4023 | 16.09 | 2800 | 0.5523 | 0.7449 | 0.7460 | | 0.3963 | 17.24 | 3000 | 0.5205 | 0.7690 | 0.7686 | | 0.3893 | 18.39 | 3200 | 0.5240 | 0.7604 | 0.7600 | | 0.3857 | 19.54 | 3400 | 0.5227 | 0.7608 | 0.7603 | | 0.3733 | 20.69 | 3600 | 0.5274 | 0.7668 | 0.7665 | | 0.3671 | 21.84 | 3800 | 0.5369 | 0.7570 | 0.7567 | | 0.3584 | 22.99 | 4000 | 0.5472 | 0.7583 | 0.7582 | | 0.3573 | 24.14 | 4200 | 0.5395 | 0.7627 | 0.7625 | | 0.3427 | 25.29 | 4400 | 0.5633 | 0.7579 | 0.7575 | | 0.3432 | 26.44 | 4600 | 0.5609 | 0.7630 | 0.7625 | | 0.34 | 27.59 | 4800 | 0.5436 | 0.7630 | 0.7625 | | 0.3268 | 28.74 | 5000 | 0.5575 | 0.7583 | 0.7578 | | 0.3327 | 29.89 | 5200 | 0.5748 | 0.7576 | 0.7571 | | 0.3184 | 31.03 | 5400 | 0.6080 | 0.7481 | 0.7485 | | 0.3124 | 32.18 | 5600 | 0.6024 | 0.7576 | 0.7571 | | 0.3023 | 33.33 | 5800 | 0.5905 | 0.7619 | 0.7614 | | 0.3034 | 34.48 | 6000 | 0.5878 | 0.7565 | 0.7560 | | 0.296 | 35.63 | 6200 | 0.6280 | 0.7581 | 0.7578 | | 0.2959 | 36.78 | 6400 | 0.5909 | 0.7576 | 0.7571 | | 0.2882 | 37.93 | 6600 | 0.6093 | 0.7601 | 0.7596 | | 0.2842 | 39.08 | 6800 | 0.6144 | 0.7593 | 0.7589 | | 0.2795 | 40.23 | 7000 | 0.6325 | 0.7634 | 0.7629 | | 0.2753 | 41.38 | 7200 | 0.6252 | 0.7626 | 0.7621 | | 0.2725 | 42.53 | 7400 | 0.6288 | 0.7598 | 0.7593 | | 0.2677 | 43.68 | 7600 | 0.6609 | 0.7544 | 0.7539 | | 0.2641 | 44.83 | 7800 | 0.6607 | 0.7592 | 0.7589 | | 0.2631 | 45.98 | 8000 | 0.6491 | 0.7494 | 0.7488 | | 0.2561 | 47.13 | 8200 | 0.6762 | 0.7568 | 0.7564 | | 0.2575 | 48.28 | 8400 | 0.6790 | 0.7489 | 0.7485 | | 0.2553 | 49.43 | 8600 | 0.6813 | 0.7464 | 0.7460 | | 0.2532 | 50.57 | 8800 | 0.6796 | 0.7554 | 0.7549 | | 0.2533 | 
51.72 | 9000 | 0.6673 | 0.7543 | 0.7539 | | 0.246 | 52.87 | 9200 | 0.6832 | 0.7511 | 0.7506 | | 0.2484 | 54.02 | 9400 | 0.6774 | 0.7533 | 0.7528 | | 0.2451 | 55.17 | 9600 | 0.6841 | 0.7543 | 0.7539 | | 0.2451 | 56.32 | 9800 | 0.6777 | 0.7551 | 0.7546 | | 0.2412 | 57.47 | 10000 | 0.6790 | 0.7544 | 0.7539 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:20:51+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3 <Gallery /> ## Model description These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of generating images for the [Critical Dream](https://github.com/cosmicBboy/critical-dream) project. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: stabilityai/sdxl-vae. ## Trigger words You should use "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3/tree/main) them in the Files & versions tab. ## Tracker run link https://wandb.ai/nielsbantilan/dreambooth-lora-sd-xl/runs/thnecpj9 ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
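Filling in the "How to use" placeholder above, here is a hedged sketch of how the pipeline could be run; the fp16 dtype, the 30-step schedule, and relying on the default VAE (rather than the `stabilityai/sdxl-vae` used during training) are assumptions.

```python
# Hedged sketch: load the SDXL base pipeline, attach the DreamBooth LoRA weights,
# and generate with one of the trigger prompts from this card.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3"
)

prompt = (
    "a picture of [dm-matt-mercer], a dungeon master. background is a forest. "
    "fantasy art style, high quality, highly detailed, sharp focus"
)
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("dm-matt-mercer.png")
```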
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "prompt": "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\"", "widget": [{"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_0.png"}}, {"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_1.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_2.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_3.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_4.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_5.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_6.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_7.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_8.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_9.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_10.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_11.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_12.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_13.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_14.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_15.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_16.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_17.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_18.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_19.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_20.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_21.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_22.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_23.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. 
fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_24.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_25.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_26.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_27.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_28.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_29.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_30.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_31.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_32.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_33.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_34.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_35.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_36.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_37.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_38.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_39.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_40.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_41.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_42.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_43.png"}}]}
cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3
null
[ "diffusers", "text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-27T03:21:15+00:00
null
null
{}
anabioz/a
null
[ "region:us" ]
null
2024-04-27T03:21:18+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3835 - Precision: 0.6242 - Recall: 0.6563 - F1: 0.6399 - Accuracy: 0.9043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 368 | 0.3060 | 0.5800 | 0.6174 | 0.5981 | 0.8963 | | 0.2936 | 2.0 | 736 | 0.2901 | 0.6033 | 0.6240 | 0.6135 | 0.8992 | | 0.2936 | 3.0 | 1104 | 0.3063 | 0.6304 | 0.6364 | 0.6334 | 0.9052 | | 0.1156 | 4.0 | 1472 | 0.3404 | 0.6293 | 0.6563 | 0.6425 | 0.9033 | | 0.1156 | 5.0 | 1840 | 0.3835 | 0.6242 | 0.6563 | 0.6399 | 0.9043 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
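For inference, a minimal token-classification pipeline sketch like the one below should work, assuming the entity label names were saved in the model config; the example sentence is a placeholder.

```python
# Minimal sketch: aggregate sub-word predictions into entity spans.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="manish1103125/results",
    aggregation_strategy="simple",
)
print(tagger("Hugging Face is based in New York City."))
```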
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "results", "results": []}]}
manish1103125/results
null
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:21:32+00:00
null
null
{}
anabioz/b
null
[ "region:us" ]
null
2024-04-27T03:22:01+00:00
null
null
{}
anabioz/c
null
[ "region:us" ]
null
2024-04-27T03:22:06+00:00
null
null
{}
anabioz/d
null
[ "region:us" ]
null
2024-04-27T03:22:09+00:00
null
null
{}
anabioz/e
null
[ "region:us" ]
null
2024-04-27T03:22:14+00:00
null
null
{}
anabioz/f
null
[ "region:us" ]
null
2024-04-27T03:22:17+00:00
null
null
{}
anabioz/h
null
[ "region:us" ]
null
2024-04-27T03:22:24+00:00
null
null
{}
anabioz/i
null
[ "region:us" ]
null
2024-04-27T03:22:31+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5855 - F1 Score: 0.6926 - Accuracy: 0.6929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6611 | 0.87 | 200 | 0.6361 | 0.6438 | 0.6435 | | 0.6221 | 1.74 | 400 | 0.6143 | 0.6660 | 0.6660 | | 0.6062 | 2.61 | 600 | 0.6063 | 0.6707 | 0.6704 | | 0.5939 | 3.48 | 800 | 0.6001 | 0.6788 | 0.6785 | | 0.5887 | 4.35 | 1000 | 0.6005 | 0.6763 | 0.6764 | | 0.5844 | 5.22 | 1200 | 0.5994 | 0.6740 | 0.6772 | | 0.5804 | 6.09 | 1400 | 0.6114 | 0.6712 | 0.6755 | | 0.5754 | 6.96 | 1600 | 0.5962 | 0.6802 | 0.6807 | | 0.5673 | 7.83 | 1800 | 0.6015 | 0.6832 | 0.6829 | | 0.5705 | 8.7 | 2000 | 0.6033 | 0.6827 | 0.6826 | | 0.5603 | 9.57 | 2200 | 0.5888 | 0.6866 | 0.6864 | | 0.563 | 10.43 | 2400 | 0.5926 | 0.6934 | 0.6932 | | 0.5561 | 11.3 | 2600 | 0.5848 | 0.6911 | 0.6916 | | 0.5567 | 12.17 | 2800 | 0.5865 | 0.6857 | 0.6856 | | 0.5531 | 13.04 | 3000 | 0.5878 | 0.6938 | 0.6935 | | 0.549 | 13.91 | 3200 | 0.5881 | 0.6899 | 0.6897 | | 0.543 | 14.78 | 3400 | 0.5935 | 0.6905 | 0.6908 | | 0.5421 | 15.65 | 3600 | 0.5829 | 0.6992 | 0.6989 | | 0.5387 | 16.52 | 3800 | 0.5842 | 0.6934 | 0.6932 | | 0.5373 | 17.39 | 4000 | 0.5919 | 0.6952 | 0.6954 | | 0.5384 | 18.26 | 4200 | 0.5845 | 0.6952 | 0.6954 | | 0.5325 | 19.13 | 4400 | 0.5920 | 0.7038 | 0.7035 | | 0.5312 | 20.0 | 4600 | 0.5839 | 0.7006 | 0.7008 | | 0.5317 | 20.87 | 4800 | 0.5872 | 0.7006 | 0.7008 | | 0.527 | 21.74 | 5000 | 0.5901 | 0.6967 | 0.6967 | | 0.5234 | 22.61 | 5200 | 0.5887 | 0.7060 | 0.7057 | | 0.5251 | 23.48 | 5400 | 0.6010 | 0.6930 | 0.6954 | | 0.5206 | 24.35 | 5600 | 0.5889 | 0.6974 | 0.6973 | | 0.5227 | 25.22 | 5800 | 0.5965 | 0.6996 | 0.6997 | | 0.5139 | 26.09 | 6000 | 0.6060 | 0.6994 | 0.7 | | 0.519 | 26.96 | 6200 | 0.5925 | 0.6994 | 0.7003 | | 0.514 | 27.83 | 6400 | 0.6074 | 0.6966 | 0.6986 | | 0.5142 | 28.7 | 6600 | 0.5919 | 0.7015 | 0.7014 | | 0.5129 | 29.57 | 6800 | 0.5962 | 0.7016 | 0.7014 | | 0.5069 | 30.43 | 7000 | 0.5923 | 0.7062 | 0.7065 | | 0.5132 | 31.3 | 7200 | 0.6009 | 0.6981 | 0.6984 | | 0.5065 | 32.17 | 7400 | 0.6015 | 0.6985 | 0.6986 | | 0.508 | 33.04 | 7600 | 0.5950 | 0.6975 | 0.6976 | | 0.5101 | 33.91 | 7800 | 0.5959 | 0.7003 | 0.7008 | | 0.5028 | 34.78 | 8000 | 0.6005 | 0.6991 | 0.6989 | | 0.5043 | 35.65 | 8200 | 0.6004 | 0.6992 | 0.6992 | | 0.5052 | 36.52 | 8400 | 0.5988 | 0.7013 | 0.7014 | | 0.5001 | 37.39 | 8600 | 0.6034 | 0.6981 | 0.6978 | | 0.4996 | 38.26 | 8800 | 0.6048 | 0.6971 | 0.6976 | | 0.5049 | 39.13 | 
9000 | 0.6043 | 0.6998 | 0.7 | | 0.5001 | 40.0 | 9200 | 0.6024 | 0.7026 | 0.7024 | | 0.4987 | 40.87 | 9400 | 0.6031 | 0.6970 | 0.6967 | | 0.4975 | 41.74 | 9600 | 0.6039 | 0.6999 | 0.6997 | | 0.5044 | 42.61 | 9800 | 0.6008 | 0.7012 | 0.7011 | | 0.4979 | 43.48 | 10000 | 0.6025 | 0.7017 | 0.7016 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:22:47+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5785 - F1 Score: 0.6982 - Accuracy: 0.6984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6693 | 0.87 | 200 | 0.6534 | 0.6222 | 0.6236 | | 0.6427 | 1.74 | 400 | 0.6303 | 0.6502 | 0.6503 | | 0.6307 | 2.61 | 600 | 0.6191 | 0.6628 | 0.6625 | | 0.6147 | 3.48 | 800 | 0.6102 | 0.6633 | 0.6633 | | 0.6125 | 4.35 | 1000 | 0.6106 | 0.6576 | 0.6592 | | 0.6062 | 5.22 | 1200 | 0.6167 | 0.6607 | 0.6639 | | 0.6032 | 6.09 | 1400 | 0.6192 | 0.6533 | 0.6584 | | 0.5988 | 6.96 | 1600 | 0.6095 | 0.6696 | 0.6701 | | 0.5959 | 7.83 | 1800 | 0.6071 | 0.6671 | 0.6677 | | 0.5949 | 8.7 | 2000 | 0.6028 | 0.6736 | 0.6734 | | 0.5888 | 9.57 | 2200 | 0.5976 | 0.6785 | 0.6783 | | 0.5926 | 10.43 | 2400 | 0.5974 | 0.6799 | 0.6796 | | 0.5889 | 11.3 | 2600 | 0.5984 | 0.6801 | 0.6810 | | 0.5877 | 12.17 | 2800 | 0.5987 | 0.6783 | 0.6780 | | 0.587 | 13.04 | 3000 | 0.5950 | 0.6806 | 0.6804 | | 0.5847 | 13.91 | 3200 | 0.5936 | 0.6816 | 0.6815 | | 0.5823 | 14.78 | 3400 | 0.5943 | 0.6798 | 0.6807 | | 0.5816 | 15.65 | 3600 | 0.5929 | 0.6830 | 0.6832 | | 0.5793 | 16.52 | 3800 | 0.5972 | 0.6814 | 0.6815 | | 0.5786 | 17.39 | 4000 | 0.5914 | 0.6868 | 0.6867 | | 0.5773 | 18.26 | 4200 | 0.5954 | 0.6863 | 0.6861 | | 0.576 | 19.13 | 4400 | 0.5976 | 0.6855 | 0.6853 | | 0.5754 | 20.0 | 4600 | 0.5908 | 0.6883 | 0.6886 | | 0.578 | 20.87 | 4800 | 0.5926 | 0.6828 | 0.6829 | | 0.5744 | 21.74 | 5000 | 0.5937 | 0.6859 | 0.6864 | | 0.5723 | 22.61 | 5200 | 0.5884 | 0.6909 | 0.6908 | | 0.5747 | 23.48 | 5400 | 0.5952 | 0.6837 | 0.6853 | | 0.5696 | 24.35 | 5600 | 0.5902 | 0.6907 | 0.6905 | | 0.5742 | 25.22 | 5800 | 0.5922 | 0.6866 | 0.6878 | | 0.5682 | 26.09 | 6000 | 0.5960 | 0.6856 | 0.6864 | | 0.5728 | 26.96 | 6200 | 0.5908 | 0.6881 | 0.6889 | | 0.5687 | 27.83 | 6400 | 0.5986 | 0.6824 | 0.6851 | | 0.5667 | 28.7 | 6600 | 0.5913 | 0.6876 | 0.6880 | | 0.5675 | 29.57 | 6800 | 0.5865 | 0.6906 | 0.6905 | | 0.5655 | 30.43 | 7000 | 0.5901 | 0.6881 | 0.6891 | | 0.5702 | 31.3 | 7200 | 0.5908 | 0.6847 | 0.6856 | | 0.5655 | 32.17 | 7400 | 0.5908 | 0.6875 | 0.6883 | | 0.5673 | 33.04 | 7600 | 0.5842 | 0.6899 | 0.6899 | | 0.567 | 33.91 | 7800 | 0.5884 | 0.6889 | 0.6894 | | 0.5643 | 34.78 | 8000 | 0.5900 | 0.6898 | 0.6899 | | 0.5648 | 35.65 | 8200 | 0.5865 | 0.6928 | 0.6929 | | 0.5646 | 36.52 | 8400 | 0.5887 | 0.6902 | 0.6908 | | 0.5655 | 37.39 | 8600 | 0.5885 | 0.6903 | 0.6905 | | 0.5614 | 38.26 | 8800 | 0.5922 | 0.6897 | 0.6905 | | 0.5687 | 39.13 
| 9000 | 0.5876 | 0.6902 | 0.6910 | | 0.5637 | 40.0 | 9200 | 0.5869 | 0.6919 | 0.6921 | | 0.561 | 40.87 | 9400 | 0.5883 | 0.6917 | 0.6916 | | 0.5598 | 41.74 | 9600 | 0.5889 | 0.6922 | 0.6924 | | 0.5699 | 42.61 | 9800 | 0.5862 | 0.6905 | 0.6908 | | 0.5619 | 43.48 | 10000 | 0.5868 | 0.6917 | 0.6918 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T03:22:47+00:00
null
null
{}
anabioz/g
null
[ "region:us" ]
null
2024-04-27T03:22:48+00:00
null
null
{"license": "artistic-2.0"}
kcz017/dargelos
null
[ "license:artistic-2.0", "region:us" ]
null
2024-04-27T03:23:29+00:00
text-generation
transformers
# Qwen1.5-110B-Chat ## About Quantization 我们使用modelscope [swift](https://github.com/modelscope/swift/)仓库进行AWQ量化. 量化文档可以查看[这里](https://github.com/modelscope/swift/blob/main/docs/source/LLM/LLM%E9%87%8F%E5%8C%96%E6%96%87%E6%A1%A3.md). 量化命令如下: We use the modelscope [swift](https://github.com/modelscope/swift/) repository to perform AWQ quantization. Quantization documentation can be found [here](https://github.com/modelscope/swift/blob/main/docs/source_en/LLM/LLM-quantization.md). The quantization command is as follows: ```bash CUDA_VISIBLE_DEVICES=0 swift export \ --model_type qwen1half-110b-chat --quant_bits 4 \ --dataset sharegpt-gpt4-mini alpaca-zh alpaca-en \ --quant_method awq --quant_seqlen 8192 --quant_n_samples 512 ``` ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include: * 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B and 110B) and the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen1.5 has been in the latest Hugging Face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "study-hjt/Qwen1.5-110B-Chat-AWQ", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("study-hjt/Qwen1.5-110B-Chat-AWQ") prompt = "Give me a short introduction to large language model."
messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
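As a quick sanity check for the `transformers>=4.37.0` requirement noted above, the following is a minimal sketch (our addition, not part of the upstream Qwen card); the `packaging` helper used for the version comparison is an assumption about the environment.

```python
# Minimal sketch: confirm the installed transformers release knows the "qwen2"
# architecture before loading the checkpoint (older releases raise KeyError: 'qwen2').
import transformers
from packaging import version  # assumption: packaging is available in the environment

if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} predates Qwen2 support; "
        "upgrade with: pip install -U 'transformers>=4.37.0'"
    )
```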
{"language": ["en"], "license": "other", "tags": ["chat", "qwen", "awq", "int4", "4bits"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation"}
study-hjt/Qwen1.5-110B-Chat-AWQ
null
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "qwen", "awq", "int4", "4bits", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T03:24:04+00:00
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
manish1103125/NER-Task1
null
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:24:24+00:00
null
transformers
# Uploaded model

- **Developed by:** vutuka
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
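The card does not include a loading example; the sketch below is a hedged illustration of attaching this LoRA adapter to its 4-bit base model with PEFT. Only the two repository ids come from the card; the device placement and the use of PEFT's `PeftModel` loader are assumptions.

```python
# Hedged sketch: load the 4-bit base model and apply the LoRA adapter from this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "vutuka/llama-3-8b-african-aya-lora")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```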
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
vutuka/llama-3-8b-african-aya-lora
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:26:54+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
sherrys/426_mistral_RAFT_50e_10s
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T03:27:29+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/x1dccfy
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:29:18+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/vumpzdo
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:29:18+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/blgymh6
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:29:18+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/9xu4ir3
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:29:18+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/ko2vss3
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:29:18+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/18jgts7
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:29:18+00:00
null
null
{}
kcz017/AdrianDargelos
null
[ "region:us" ]
null
2024-04-27T03:30:56+00:00
null
null
{"license": "apache-2.0"}
jojeong/detr_traffic_light
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-27T03:31:00+00:00
null
transformers
# Uploaded model

- **Developed by:** vutuka
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
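The card does not show how to run the GGUF export; the sketch below is a hedged illustration using llama-cpp-python's Hub loader. The `.gguf` filename pattern is a guess and should be checked against the repository's file list.

```python
# Hedged sketch: run the 8-bit GGUF export with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="vutuka/llama-3-8b-african-aya-gguf-8bit",
    filename="*Q8_0.gguf",  # assumption: adjust to the 8-bit file actually present in the repo
    n_ctx=4096,
)
out = llm("Translate to Swahili: Good morning.", max_tokens=64)
print(out["choices"][0]["text"])
```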
{"language": ["en", "sw", "af", "fr", "yo", "am", "ar", "pt", "ig"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "datasets": ["vutuka/aya_african_alpaca"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
vutuka/llama-3-8b-african-aya-gguf-8bit
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "sw", "af", "fr", "yo", "am", "ar", "pt", "ig", "dataset:vutuka/aya_african_alpaca", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:36:39+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course materials;
# it downloads and unpickles the saved Q-learning dictionary from the Hub.
model = load_from_hub(repo_id="liqiu0202/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
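As a hedged follow-up sketch (not part of the original card), the snippet below runs one greedy episode with the downloaded table; the `"qtable"` key and the classic pre-0.26 Gym step API are assumptions about how the course pickles these agents.

```python
# Hedged sketch: one greedy episode with the downloaded Q-table.
# Assumes model["qtable"] holds the table and the classic gym API
# (reset() returns an observation, step() returns a 4-tuple).
import gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned values
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```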
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
liqiu0202/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-27T03:39:10+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-filtered-50-0.003
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:39:13+00:00