Dataset columns:

| column | dtype | range |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
text-generation
transformers
{}
WilliamStar/eli5_clm-model
null
[ "transformers", "pytorch", "tensorboard", "pegasus", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:23:48+00:00
null
null
{}
barrybadpak/hmbarend
null
[ "region:us" ]
null
2024-04-27T06:24:32+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4885 - F1 Score: 0.7820 - Accuracy: 0.782 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5649 | 1.34 | 200 | 0.5222 | 0.7425 | 0.744 | | 0.5198 | 2.68 | 400 | 0.5167 | 0.7509 | 0.752 | | 0.5037 | 4.03 | 600 | 0.5180 | 0.7536 | 0.755 | | 0.4921 | 5.37 | 800 | 0.5142 | 0.7540 | 0.755 | | 0.4849 | 6.71 | 1000 | 0.5136 | 0.7554 | 0.756 | | 0.4747 | 8.05 | 1200 | 0.4985 | 0.7513 | 0.752 | | 0.4639 | 9.4 | 1400 | 0.5182 | 0.7415 | 0.742 | | 0.4579 | 10.74 | 1600 | 0.5201 | 0.7453 | 0.746 | | 0.4514 | 12.08 | 1800 | 0.5115 | 0.7490 | 0.749 | | 0.4421 | 13.42 | 2000 | 0.5250 | 0.7394 | 0.74 | | 0.4354 | 14.77 | 2200 | 0.5126 | 0.7477 | 0.748 | | 0.4234 | 16.11 | 2400 | 0.5338 | 0.7438 | 0.744 | | 0.4153 | 17.45 | 2600 | 0.5287 | 0.7510 | 0.751 | | 0.4061 | 18.79 | 2800 | 0.5258 | 0.7430 | 0.743 | | 0.3981 | 20.13 | 3000 | 0.5438 | 0.7620 | 0.762 | | 0.3902 | 21.48 | 3200 | 0.5514 | 0.7394 | 0.74 | | 0.383 | 22.82 | 3400 | 0.5512 | 0.7478 | 0.748 | | 0.3701 | 24.16 | 3600 | 0.5570 | 0.7279 | 0.728 | | 0.3634 | 25.5 | 3800 | 0.5536 | 0.7439 | 0.744 | | 0.3577 | 26.85 | 4000 | 0.5462 | 0.7460 | 0.746 | | 0.3516 | 28.19 | 4200 | 0.5881 | 0.7377 | 0.738 | | 0.3421 | 29.53 | 4400 | 0.6056 | 0.7303 | 0.731 | | 0.3324 | 30.87 | 4600 | 0.5947 | 0.7438 | 0.744 | | 0.3313 | 32.21 | 4800 | 0.5837 | 0.7400 | 0.74 | | 0.3203 | 33.56 | 5000 | 0.6170 | 0.7379 | 0.738 | | 0.3184 | 34.9 | 5200 | 0.6058 | 0.7290 | 0.729 | | 0.3133 | 36.24 | 5400 | 0.5874 | 0.7400 | 0.74 | | 0.3059 | 37.58 | 5600 | 0.6140 | 0.7398 | 0.74 | | 0.3015 | 38.93 | 5800 | 0.6045 | 0.7309 | 0.731 | | 0.296 | 40.27 | 6000 | 0.6256 | 0.7308 | 0.731 | | 0.293 | 41.61 | 6200 | 0.6169 | 0.7249 | 0.725 | | 0.2827 | 42.95 | 6400 | 0.6515 | 0.7380 | 0.738 | | 0.2781 | 44.3 | 6600 | 0.6570 | 0.7299 | 0.73 | | 0.2796 | 45.64 | 6800 | 0.6887 | 0.7287 | 0.729 | | 0.2751 | 46.98 | 7000 | 0.6530 | 0.7289 | 0.729 | | 0.2708 | 48.32 | 7200 | 0.6750 | 0.7290 | 0.729 | | 0.2673 | 49.66 | 7400 | 0.6700 | 0.7288 | 0.729 | | 0.2631 | 51.01 | 7600 | 0.6750 | 0.73 | 0.73 | | 0.2541 | 52.35 | 7800 | 0.6998 | 0.7340 | 0.734 | | 0.2572 | 53.69 | 8000 | 0.6742 | 0.7370 | 0.737 | | 0.2539 | 55.03 | 8200 | 0.6811 | 0.7390 | 0.739 | | 0.251 | 56.38 | 8400 | 0.6732 | 0.7369 | 0.737 | | 0.2468 | 57.72 | 8600 | 0.7015 | 0.7320 | 0.732 | | 0.2459 | 59.06 | 8800 | 0.6816 | 0.7340 | 0.734 | | 0.245 | 60.4 | 9000 | 0.7022 | 0.7339 | 0.734 | | 0.2397 | 61.74 | 9200 | 0.7028 | 
0.7289 | 0.729 | | 0.2396 | 63.09 | 9400 | 0.7151 | 0.7298 | 0.73 | | 0.2366 | 64.43 | 9600 | 0.7071 | 0.7330 | 0.733 | | 0.2438 | 65.77 | 9800 | 0.7062 | 0.7309 | 0.731 | | 0.2363 | 67.11 | 10000 | 0.7061 | 0.7319 | 0.732 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
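Since the card's usage sections are empty, below is a minimal, hedged inference sketch for this adapter. It assumes the checkpoint is a LoRA-style PEFT adapter over a two-label sequence-classification head (the card does not state the label count) and that the base model loads through the standard `transformers` auto classes; `trust_remote_code=True` may additionally be needed if the base architecture is custom.

```python
# Minimal inference sketch (assumptions noted above; not taken from the card itself).
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_2-seqsight_8192_512_30M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=2 is an assumption, not stated in the card.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # placeholder DNA sequence
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```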
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_2-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:24:42+00:00
null
null
{}
thusinh1969/LLaMA-2-finetune-EP2-DPO-25APRIL2024-16bit-gguf
null
[ "gguf", "region:us" ]
null
2024-04-27T06:24:51+00:00
text-generation
transformers
# Qwen1.5-110B-Chat-AWQ

## Introduction

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:

* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;
* significant performance improvement in human preference for chat models;
* multilingual support in both base and chat models;
* stable support of 32K context length for models of all sizes;
* no need for `trust_remote_code`.

For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

## Model Details

Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily not included GQA (except for 32B and 110B) or the mixture of SWA and full attention.

## Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements

The code for Qwen1.5 has been merged into the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:

```
KeyError: 'qwen2'
```

## Quickstart

The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-110B-Chat-AWQ",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-110B-Chat-AWQ")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## Tips

* If you encounter code switching or other bad cases, we advise you to use the hyper-parameters we provide in `generation_config.json` (see the sketch after the citation below).

## Citation

If you find our work helpful, feel free to cite us.

```
@article{qwen,
  title={Qwen Technical Report},
  author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
  journal={arXiv preprint arXiv:2309.16609},
  year={2023}
}
```
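Picking up the tip above about `generation_config.json`: the snippet below sketches how sampling hyper-parameters can be overridden explicitly at generation time. It continues from the Quickstart (reusing `model` and `model_inputs`); the concrete values are illustrative placeholders, not the ones shipped with the model.

```python
# A hedged sketch only: the values below are placeholders, not the official
# defaults from Qwen's generation_config.json. All arguments are standard
# transformers generate() parameters.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,         # placeholder value
    top_p=0.8,               # placeholder value
    repetition_penalty=1.05  # placeholder value
)
```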
{"language": ["en"], "license": "other", "tags": ["chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/Qwen1.5-110B-Chat-AWQ/blob/main/LICENSE", "pipeline_tag": "text-generation"}
Qwen/Qwen1.5-110B-Chat-AWQ
null
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T06:25:13+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
swj0419/bbc_retrain_new_STEP0000100
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:25:41+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.2289 - F1 Score: 0.5468 - Accuracy: 0.5496 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.1855 | 0.35 | 200 | 2.1850 | 0.0777 | 0.1281 | | 2.1796 | 0.7 | 400 | 2.1773 | 0.0786 | 0.1305 | | 2.1694 | 1.05 | 600 | 2.1674 | 0.1004 | 0.1358 | | 2.1535 | 1.4 | 800 | 2.1291 | 0.1234 | 0.1776 | | 2.1097 | 1.75 | 1000 | 2.0424 | 0.1789 | 0.2265 | | 2.031 | 2.09 | 1200 | 1.9501 | 0.2330 | 0.2619 | | 1.9484 | 2.44 | 1400 | 1.8227 | 0.2932 | 0.3053 | | 1.876 | 2.79 | 1600 | 1.7586 | 0.3244 | 0.3364 | | 1.8259 | 3.14 | 1800 | 1.7019 | 0.3421 | 0.3559 | | 1.7812 | 3.49 | 2000 | 1.6691 | 0.3666 | 0.3740 | | 1.754 | 3.84 | 2200 | 1.6100 | 0.3945 | 0.4045 | | 1.7057 | 4.19 | 2400 | 1.5670 | 0.4220 | 0.4194 | | 1.6663 | 4.54 | 2600 | 1.5210 | 0.4305 | 0.4334 | | 1.6469 | 4.89 | 2800 | 1.5190 | 0.4305 | 0.4318 | | 1.6263 | 5.24 | 3000 | 1.4904 | 0.4349 | 0.4422 | | 1.6046 | 5.58 | 3200 | 1.4649 | 0.4517 | 0.4554 | | 1.5793 | 5.93 | 3400 | 1.4500 | 0.4442 | 0.4518 | | 1.5689 | 6.28 | 3600 | 1.4389 | 0.4618 | 0.4596 | | 1.5559 | 6.63 | 3800 | 1.4115 | 0.4620 | 0.4696 | | 1.5339 | 6.98 | 4000 | 1.3988 | 0.4715 | 0.4851 | | 1.5257 | 7.33 | 4200 | 1.3822 | 0.4841 | 0.4923 | | 1.5065 | 7.68 | 4400 | 1.3691 | 0.4873 | 0.4920 | | 1.4975 | 8.03 | 4600 | 1.3517 | 0.4955 | 0.5023 | | 1.4805 | 8.38 | 4800 | 1.3445 | 0.4912 | 0.4993 | | 1.4796 | 8.73 | 5000 | 1.3267 | 0.5133 | 0.5179 | | 1.4511 | 9.08 | 5200 | 1.3267 | 0.5066 | 0.5062 | | 1.4485 | 9.42 | 5400 | 1.3009 | 0.5179 | 0.5251 | | 1.4423 | 9.77 | 5600 | 1.2948 | 0.5202 | 0.5275 | | 1.4405 | 10.12 | 5800 | 1.2897 | 0.5204 | 0.5236 | | 1.4335 | 10.47 | 6000 | 1.2751 | 0.5303 | 0.5329 | | 1.4257 | 10.82 | 6200 | 1.2725 | 0.5306 | 0.5333 | | 1.3988 | 11.17 | 6400 | 1.2673 | 0.5330 | 0.5350 | | 1.4113 | 11.52 | 6600 | 1.2662 | 0.5356 | 0.5357 | | 1.4073 | 11.87 | 6800 | 1.2548 | 0.5383 | 0.5384 | | 1.4015 | 12.22 | 7000 | 1.2573 | 0.5343 | 0.5373 | | 1.3847 | 12.57 | 7200 | 1.2444 | 0.5417 | 0.5445 | | 1.3905 | 12.91 | 7400 | 1.2465 | 0.5384 | 0.5398 | | 1.3904 | 13.26 | 7600 | 1.2347 | 0.5432 | 0.5434 | | 1.3764 | 13.61 | 7800 | 1.2385 | 0.5463 | 0.5444 | | 1.3763 | 13.96 | 8000 | 1.2293 | 0.5449 | 0.5466 | | 1.3708 | 14.31 | 8200 | 1.2276 | 0.5451 | 0.5481 | | 1.3686 | 14.66 | 8400 | 1.2254 | 0.5482 | 0.5480 | | 1.3699 | 15.01 | 8600 | 1.2273 | 0.5449 | 0.5508 | | 1.3725 | 15.36 | 8800 | 1.2182 | 0.5528 | 0.5539 | | 1.3484 | 15.71 | 9000 | 
1.2193 | 0.5482 | 0.5516 | | 1.3594 | 16.06 | 9200 | 1.2163 | 0.5486 | 0.5514 | | 1.3608 | 16.4 | 9400 | 1.2147 | 0.5478 | 0.5516 | | 1.3575 | 16.75 | 9600 | 1.2145 | 0.5505 | 0.5527 | | 1.3553 | 17.1 | 9800 | 1.2140 | 0.5500 | 0.5525 | | 1.358 | 17.45 | 10000 | 1.2140 | 0.5519 | 0.5549 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
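For orientation, the hyperparameter list above corresponds roughly to the following `transformers` `TrainingArguments` sketch. The model/dataset wiring is omitted, and the 200-step evaluation cadence is inferred from the results table rather than stated in the card.

```python
# Sketch of TrainingArguments mirroring the card's hyperparameter list.
# output_dir and the eval cadence are assumptions; the rest is as listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_virus_covid-seqsight_8192_512_30M-L8_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,
)
```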
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_virus_covid-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:26:29+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.6492 - F1 Score: 0.3916 - Accuracy: 0.3854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.1856 | 0.35 | 200 | 2.1860 | 0.0460 | 0.1191 | | 2.1822 | 0.7 | 400 | 2.1840 | 0.0599 | 0.1231 | | 2.1777 | 1.05 | 600 | 2.1814 | 0.0780 | 0.1308 | | 2.1743 | 1.4 | 800 | 2.1715 | 0.0743 | 0.1401 | | 2.167 | 1.75 | 1000 | 2.1623 | 0.0936 | 0.1480 | | 2.1605 | 2.09 | 1200 | 2.1612 | 0.1058 | 0.1560 | | 2.1515 | 2.44 | 1400 | 2.1490 | 0.1463 | 0.1608 | | 2.1381 | 2.79 | 1600 | 2.1187 | 0.1520 | 0.1872 | | 2.1201 | 3.14 | 1800 | 2.1000 | 0.1576 | 0.2019 | | 2.1026 | 3.49 | 2000 | 2.0667 | 0.1722 | 0.2203 | | 2.0866 | 3.84 | 2200 | 2.0280 | 0.2038 | 0.2406 | | 2.0574 | 4.19 | 2400 | 1.9980 | 0.2195 | 0.2421 | | 2.0332 | 4.54 | 2600 | 1.9691 | 0.2224 | 0.2566 | | 2.0145 | 4.89 | 2800 | 1.9464 | 0.2542 | 0.2722 | | 1.9963 | 5.24 | 3000 | 1.9197 | 0.2488 | 0.2815 | | 1.9832 | 5.58 | 3200 | 1.8955 | 0.2638 | 0.2917 | | 1.9536 | 5.93 | 3400 | 1.8678 | 0.2993 | 0.3152 | | 1.9413 | 6.28 | 3600 | 1.8402 | 0.3140 | 0.3217 | | 1.9241 | 6.63 | 3800 | 1.8249 | 0.3058 | 0.3198 | | 1.9091 | 6.98 | 4000 | 1.7995 | 0.3194 | 0.3322 | | 1.897 | 7.33 | 4200 | 1.7836 | 0.3233 | 0.3352 | | 1.8756 | 7.68 | 4400 | 1.7592 | 0.3454 | 0.3498 | | 1.8677 | 8.03 | 4600 | 1.7630 | 0.3215 | 0.3314 | | 1.856 | 8.38 | 4800 | 1.7384 | 0.3302 | 0.3465 | | 1.8508 | 8.73 | 5000 | 1.7255 | 0.3445 | 0.3526 | | 1.8347 | 9.08 | 5200 | 1.7255 | 0.3522 | 0.3518 | | 1.8283 | 9.42 | 5400 | 1.7108 | 0.3478 | 0.3608 | | 1.8247 | 9.77 | 5600 | 1.7034 | 0.3530 | 0.3613 | | 1.8133 | 10.12 | 5800 | 1.6961 | 0.3608 | 0.3680 | | 1.8155 | 10.47 | 6000 | 1.6899 | 0.3659 | 0.3654 | | 1.8112 | 10.82 | 6200 | 1.6830 | 0.3615 | 0.3646 | | 1.7961 | 11.17 | 6400 | 1.6881 | 0.3563 | 0.3582 | | 1.7989 | 11.52 | 6600 | 1.6829 | 0.3712 | 0.3691 | | 1.7956 | 11.87 | 6800 | 1.6736 | 0.3713 | 0.3728 | | 1.7853 | 12.22 | 7000 | 1.6661 | 0.3705 | 0.3707 | | 1.7802 | 12.57 | 7200 | 1.6657 | 0.3784 | 0.3768 | | 1.7843 | 12.91 | 7400 | 1.6640 | 0.3764 | 0.3782 | | 1.7861 | 13.26 | 7600 | 1.6617 | 0.3813 | 0.3799 | | 1.7732 | 13.61 | 7800 | 1.6594 | 0.3840 | 0.3787 | | 1.7761 | 13.96 | 8000 | 1.6559 | 0.3790 | 0.3755 | | 1.7699 | 14.31 | 8200 | 1.6545 | 0.3815 | 0.3833 | | 1.7722 | 14.66 | 8400 | 1.6481 | 0.3865 | 0.3846 | | 1.7709 | 15.01 | 8600 | 1.6509 | 0.3806 | 0.3818 | | 1.7755 | 15.36 | 8800 | 1.6469 | 0.3876 | 0.3833 | | 1.7549 | 15.71 | 9000 | 
1.6479 | 0.3843 | 0.3838 | | 1.7576 | 16.06 | 9200 | 1.6445 | 0.3873 | 0.3848 | | 1.7721 | 16.4 | 9400 | 1.6436 | 0.3875 | 0.3871 | | 1.7559 | 16.75 | 9600 | 1.6441 | 0.3861 | 0.3840 | | 1.7599 | 17.1 | 9800 | 1.6441 | 0.3872 | 0.3864 | | 1.765 | 17.45 | 10000 | 1.6439 | 0.3874 | 0.3864 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_virus_covid-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:26:29+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4922 - F1 Score: 0.8009 - Accuracy: 0.8010 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6111 | 5.13 | 200 | 0.5537 | 0.7272 | 0.7325 | | 0.5084 | 10.26 | 400 | 0.5295 | 0.7550 | 0.7553 | | 0.4743 | 15.38 | 600 | 0.5202 | 0.7415 | 0.7423 | | 0.4584 | 20.51 | 800 | 0.4863 | 0.7668 | 0.7667 | | 0.4425 | 25.64 | 1000 | 0.4875 | 0.7684 | 0.7684 | | 0.4327 | 30.77 | 1200 | 0.4837 | 0.7747 | 0.7749 | | 0.4214 | 35.9 | 1400 | 0.4651 | 0.7929 | 0.7928 | | 0.4151 | 41.03 | 1600 | 0.4625 | 0.7978 | 0.7977 | | 0.4071 | 46.15 | 1800 | 0.4724 | 0.7859 | 0.7863 | | 0.4005 | 51.28 | 2000 | 0.4603 | 0.7879 | 0.7879 | | 0.3943 | 56.41 | 2200 | 0.4533 | 0.7946 | 0.7945 | | 0.3896 | 61.54 | 2400 | 0.4716 | 0.7875 | 0.7879 | | 0.3821 | 66.67 | 2600 | 0.4654 | 0.8025 | 0.8026 | | 0.3798 | 71.79 | 2800 | 0.4583 | 0.8043 | 0.8042 | | 0.3754 | 76.92 | 3000 | 0.4688 | 0.8103 | 0.8108 | | 0.3718 | 82.05 | 3200 | 0.4531 | 0.8092 | 0.8091 | | 0.3685 | 87.18 | 3400 | 0.4774 | 0.8036 | 0.8042 | | 0.366 | 92.31 | 3600 | 0.4550 | 0.8124 | 0.8124 | | 0.3609 | 97.44 | 3800 | 0.4492 | 0.8173 | 0.8173 | | 0.3546 | 102.56 | 4000 | 0.4583 | 0.8174 | 0.8173 | | 0.3538 | 107.69 | 4200 | 0.4712 | 0.8105 | 0.8108 | | 0.3495 | 112.82 | 4400 | 0.4596 | 0.8223 | 0.8222 | | 0.3476 | 117.95 | 4600 | 0.4492 | 0.8223 | 0.8222 | | 0.3417 | 123.08 | 4800 | 0.4569 | 0.8174 | 0.8173 | | 0.343 | 128.21 | 5000 | 0.4498 | 0.8207 | 0.8206 | | 0.3413 | 133.33 | 5200 | 0.4471 | 0.8223 | 0.8222 | | 0.3361 | 138.46 | 5400 | 0.4447 | 0.8239 | 0.8238 | | 0.3351 | 143.59 | 5600 | 0.4510 | 0.8239 | 0.8238 | | 0.331 | 148.72 | 5800 | 0.4490 | 0.8223 | 0.8222 | | 0.3257 | 153.85 | 6000 | 0.4513 | 0.8256 | 0.8254 | | 0.3248 | 158.97 | 6200 | 0.4563 | 0.8256 | 0.8254 | | 0.3277 | 164.1 | 6400 | 0.4537 | 0.8239 | 0.8238 | | 0.3237 | 169.23 | 6600 | 0.4527 | 0.8207 | 0.8206 | | 0.3262 | 174.36 | 6800 | 0.4558 | 0.8190 | 0.8189 | | 0.3174 | 179.49 | 7000 | 0.4537 | 0.8207 | 0.8206 | | 0.3173 | 184.62 | 7200 | 0.4505 | 0.8222 | 0.8222 | | 0.3155 | 189.74 | 7400 | 0.4557 | 0.8223 | 0.8222 | | 0.3122 | 194.87 | 7600 | 0.4555 | 0.8223 | 0.8222 | | 0.3162 | 200.0 | 7800 | 0.4558 | 0.8191 | 0.8189 | | 0.3153 | 205.13 | 8000 | 0.4537 | 0.8256 | 0.8254 | | 0.3071 | 210.26 | 8200 | 0.4576 | 0.8239 | 0.8238 | | 0.3123 | 215.38 | 8400 | 0.4560 | 0.8256 | 0.8254 | | 0.3053 | 220.51 | 8600 | 0.4578 | 0.8223 | 0.8222 | | 
0.3072 | 225.64 | 8800 | 0.4606 | 0.8256 | 0.8254 | | 0.3081 | 230.77 | 9000 | 0.4583 | 0.8239 | 0.8238 | | 0.3066 | 235.9 | 9200 | 0.4589 | 0.8239 | 0.8238 | | 0.306 | 241.03 | 9400 | 0.4593 | 0.8239 | 0.8238 | | 0.3068 | 246.15 | 9600 | 0.4602 | 0.8239 | 0.8238 | | 0.306 | 251.28 | 9800 | 0.4595 | 0.8239 | 0.8238 | | 0.3071 | 256.41 | 10000 | 0.4592 | 0.8256 | 0.8254 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:26:41+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.0241 - F1 Score: 0.6185 - Accuracy: 0.6155 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.1851 | 0.35 | 200 | 2.1813 | 0.0821 | 0.1275 | | 2.1775 | 0.7 | 400 | 2.1701 | 0.0978 | 0.1440 | | 2.1488 | 1.05 | 600 | 2.1058 | 0.1574 | 0.1842 | | 2.0473 | 1.4 | 800 | 1.9125 | 0.2244 | 0.2708 | | 1.8692 | 1.75 | 1000 | 1.7204 | 0.3289 | 0.3473 | | 1.7448 | 2.09 | 1200 | 1.6459 | 0.3590 | 0.3839 | | 1.6821 | 2.44 | 1400 | 1.5493 | 0.3897 | 0.4111 | | 1.6134 | 2.79 | 1600 | 1.5078 | 0.4089 | 0.4310 | | 1.5675 | 3.14 | 1800 | 1.4539 | 0.4367 | 0.4582 | | 1.5234 | 3.49 | 2000 | 1.4160 | 0.4521 | 0.4626 | | 1.4991 | 3.84 | 2200 | 1.3871 | 0.4674 | 0.4772 | | 1.4587 | 4.19 | 2400 | 1.3459 | 0.4940 | 0.4974 | | 1.4268 | 4.54 | 2600 | 1.3173 | 0.4931 | 0.5076 | | 1.4163 | 4.89 | 2800 | 1.2876 | 0.5040 | 0.5169 | | 1.3838 | 5.24 | 3000 | 1.2806 | 0.5090 | 0.5181 | | 1.3637 | 5.58 | 3200 | 1.2468 | 0.5258 | 0.5297 | | 1.3358 | 5.93 | 3400 | 1.2424 | 0.5215 | 0.5291 | | 1.3196 | 6.28 | 3600 | 1.2202 | 0.5368 | 0.5413 | | 1.3075 | 6.63 | 3800 | 1.1931 | 0.5407 | 0.5541 | | 1.2941 | 6.98 | 4000 | 1.1811 | 0.5410 | 0.5470 | | 1.2761 | 7.33 | 4200 | 1.1674 | 0.5603 | 0.5616 | | 1.263 | 7.68 | 4400 | 1.1502 | 0.5599 | 0.5655 | | 1.2595 | 8.03 | 4600 | 1.1492 | 0.5653 | 0.5681 | | 1.2293 | 8.38 | 4800 | 1.1303 | 0.5633 | 0.5715 | | 1.238 | 8.73 | 5000 | 1.1224 | 0.5725 | 0.5719 | | 1.2202 | 9.08 | 5200 | 1.1197 | 0.5782 | 0.5748 | | 1.2084 | 9.42 | 5400 | 1.1105 | 0.5813 | 0.5826 | | 1.2058 | 9.77 | 5600 | 1.0964 | 0.5816 | 0.5830 | | 1.1931 | 10.12 | 5800 | 1.0859 | 0.5912 | 0.5883 | | 1.1906 | 10.47 | 6000 | 1.0810 | 0.5909 | 0.5889 | | 1.1791 | 10.82 | 6200 | 1.0744 | 0.5976 | 0.5936 | | 1.1562 | 11.17 | 6400 | 1.0731 | 0.5945 | 0.5940 | | 1.1669 | 11.52 | 6600 | 1.0689 | 0.6019 | 0.5973 | | 1.1696 | 11.87 | 6800 | 1.0601 | 0.5996 | 0.5968 | | 1.1597 | 12.22 | 7000 | 1.0579 | 0.6047 | 0.6016 | | 1.1496 | 12.57 | 7200 | 1.0557 | 0.5999 | 0.5966 | | 1.1548 | 12.91 | 7400 | 1.0510 | 0.6041 | 0.6006 | | 1.1411 | 13.26 | 7600 | 1.0528 | 0.6037 | 0.5991 | | 1.1441 | 13.61 | 7800 | 1.0499 | 0.6110 | 0.6041 | | 1.1352 | 13.96 | 8000 | 1.0411 | 0.6079 | 0.6054 | | 1.1289 | 14.31 | 8200 | 1.0378 | 0.6108 | 0.6069 | | 1.1323 | 14.66 | 8400 | 1.0389 | 0.6059 | 0.6045 | | 1.129 | 15.01 | 8600 | 1.0371 | 0.6070 | 0.6050 | | 1.1341 | 15.36 | 8800 | 1.0289 | 0.6143 | 0.6102 | | 1.1156 | 15.71 | 9000 | 
1.0308 | 0.6106 | 0.6069 | | 1.1211 | 16.06 | 9200 | 1.0270 | 0.6124 | 0.6082 | | 1.1208 | 16.4 | 9400 | 1.0282 | 0.6119 | 0.6077 | | 1.1166 | 16.75 | 9600 | 1.0263 | 0.6132 | 0.6070 | | 1.122 | 17.1 | 9800 | 1.0263 | 0.6110 | 0.6096 | | 1.1184 | 17.45 | 10000 | 1.0254 | 0.6118 | 0.6100 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_virus_covid-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:26:42+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4809 - F1 Score: 0.8026 - Accuracy: 0.8026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5603 | 5.13 | 200 | 0.5109 | 0.7689 | 0.7700 | | 0.4608 | 10.26 | 400 | 0.4759 | 0.7863 | 0.7863 | | 0.4277 | 15.38 | 600 | 0.4688 | 0.7846 | 0.7847 | | 0.4068 | 20.51 | 800 | 0.4610 | 0.7913 | 0.7912 | | 0.3844 | 25.64 | 1000 | 0.4805 | 0.7975 | 0.7977 | | 0.3659 | 30.77 | 1200 | 0.4757 | 0.8141 | 0.8140 | | 0.3487 | 35.9 | 1400 | 0.4714 | 0.8157 | 0.8157 | | 0.3338 | 41.03 | 1600 | 0.4738 | 0.8239 | 0.8238 | | 0.3212 | 46.15 | 1800 | 0.4840 | 0.8158 | 0.8157 | | 0.3048 | 51.28 | 2000 | 0.4868 | 0.8189 | 0.8189 | | 0.2977 | 56.41 | 2200 | 0.5045 | 0.8125 | 0.8124 | | 0.2848 | 61.54 | 2400 | 0.5315 | 0.8092 | 0.8091 | | 0.2743 | 66.67 | 2600 | 0.5168 | 0.8190 | 0.8189 | | 0.267 | 71.79 | 2800 | 0.5303 | 0.8109 | 0.8108 | | 0.2593 | 76.92 | 3000 | 0.5355 | 0.8125 | 0.8124 | | 0.2459 | 82.05 | 3200 | 0.5562 | 0.8090 | 0.8091 | | 0.2479 | 87.18 | 3400 | 0.5495 | 0.8010 | 0.8010 | | 0.2395 | 92.31 | 3600 | 0.5365 | 0.8060 | 0.8059 | | 0.2284 | 97.44 | 3800 | 0.5581 | 0.8025 | 0.8026 | | 0.2217 | 102.56 | 4000 | 0.6187 | 0.7810 | 0.7814 | | 0.2173 | 107.69 | 4200 | 0.6077 | 0.7894 | 0.7896 | | 0.213 | 112.82 | 4400 | 0.5782 | 0.8042 | 0.8042 | | 0.2079 | 117.95 | 4600 | 0.5814 | 0.7946 | 0.7945 | | 0.2045 | 123.08 | 4800 | 0.5928 | 0.7962 | 0.7961 | | 0.1952 | 128.21 | 5000 | 0.6255 | 0.7974 | 0.7977 | | 0.1916 | 133.33 | 5200 | 0.6154 | 0.8011 | 0.8010 | | 0.1882 | 138.46 | 5400 | 0.6214 | 0.8011 | 0.8010 | | 0.1841 | 143.59 | 5600 | 0.6540 | 0.7992 | 0.7993 | | 0.1739 | 148.72 | 5800 | 0.6606 | 0.7995 | 0.7993 | | 0.1734 | 153.85 | 6000 | 0.6523 | 0.8044 | 0.8042 | | 0.1741 | 158.97 | 6200 | 0.6775 | 0.8043 | 0.8042 | | 0.171 | 164.1 | 6400 | 0.6521 | 0.8093 | 0.8091 | | 0.1666 | 169.23 | 6600 | 0.6671 | 0.8028 | 0.8026 | | 0.1672 | 174.36 | 6800 | 0.6838 | 0.8042 | 0.8042 | | 0.1629 | 179.49 | 7000 | 0.6794 | 0.7962 | 0.7961 | | 0.1623 | 184.62 | 7200 | 0.6745 | 0.7995 | 0.7993 | | 0.156 | 189.74 | 7400 | 0.7068 | 0.7930 | 0.7928 | | 0.1523 | 194.87 | 7600 | 0.7110 | 0.7946 | 0.7945 | | 0.1504 | 200.0 | 7800 | 0.7096 | 0.7962 | 0.7961 | | 0.1505 | 205.13 | 8000 | 0.7144 | 0.7929 | 0.7928 | | 0.1483 | 210.26 | 8200 | 0.7163 | 0.7962 | 0.7961 | | 0.1485 | 215.38 | 8400 | 0.7113 | 0.7897 | 0.7896 | | 0.1486 | 220.51 | 8600 | 0.7065 | 0.7930 | 0.7928 | | 0.148 
| 225.64 | 8800 | 0.7195 | 0.7962 | 0.7961 | | 0.1472 | 230.77 | 9000 | 0.7241 | 0.7880 | 0.7879 | | 0.1439 | 235.9 | 9200 | 0.7255 | 0.7946 | 0.7945 | | 0.1436 | 241.03 | 9400 | 0.7192 | 0.7979 | 0.7977 | | 0.1448 | 246.15 | 9600 | 0.7189 | 0.7946 | 0.7945 | | 0.144 | 251.28 | 9800 | 0.7211 | 0.7929 | 0.7928 | | 0.144 | 256.41 | 10000 | 0.7181 | 0.7995 | 0.7993 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
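The seqsight adapters above all follow the same recipe; for deployment, one might merge an adapter into its base weights so the result loads as a plain `transformers` model. The sketch below is hedged: it assumes a standard LoRA adapter and a two-label sequence-classification head, neither of which the card states.

```python
# Hedged sketch: merging a LoRA adapter into the base weights for deployment.
# merge_and_unload() is a standard peft API; the repo ids are real, but the
# classification head and num_labels are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_16384_512_22M", num_labels=2  # num_labels assumed
)
peft_model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_22M-L8_f"
)
merged = peft_model.merge_and_unload()  # plain transformers model, no peft wrapper
merged.save_pretrained("tata-L8-merged")
```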
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:27:16+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/dvr76d6
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:28:11+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
toan-ly/vinallama-peft-7b-chatbot
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:30:58+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloomz-7b1 - bnb 4bits - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloomz-7b1/ Original model description: --- datasets: - bigscience/xP3 license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript pipeline_tag: text-generation widget: - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?" example_title: "zh-en sentiment" - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?" example_title: "zh-zh sentiment" - text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"." example_title: "vi-en query" - text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»." example_title: "fr-fr query" - text: "Explain in a sentence in Telugu what is backpropagation in neural networks." example_title: "te-en qa" - text: "Why is the sky blue?" example_title: "en-en qa" - text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):" example_title: "es-en fable" - text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". 
Fable (in Hindi):" example_title: "hi-en fable" model-index: - name: bloomz-7b1 results: - task: type: Coreference resolution dataset: type: winogrande name: Winogrande XL (xl) config: xl split: validation revision: a80f460359d1e9a67c006011c94de42a8759430c metrics: - type: Accuracy value: 55.8 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (en) config: en split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 66.02 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (fr) config: fr split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 57.83 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (jp) config: jp split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 52.87 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (pt) config: pt split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 57.79 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (ru) config: ru split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 54.92 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (zh) config: zh split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 63.69 - task: type: Natural language inference dataset: type: anli name: ANLI (r1) config: r1 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 42.1 - task: type: Natural language inference dataset: type: anli name: ANLI (r2) config: r2 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 39.5 - task: type: Natural language inference dataset: type: anli name: ANLI (r3) config: r3 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 41.0 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (cb) config: cb split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 80.36 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (rte) config: rte split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 84.12 - task: type: Natural language inference dataset: type: xnli name: XNLI (ar) config: ar split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 53.25 - task: type: Natural language inference dataset: type: xnli name: XNLI (bg) config: bg split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 43.61 - task: type: Natural language inference dataset: type: xnli name: XNLI (de) config: de split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 46.83 - task: type: Natural language inference dataset: type: xnli name: XNLI (el) config: el split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 41.53 - task: type: Natural language inference dataset: type: xnli name: XNLI (en) config: en split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 59.68 - task: type: Natural language 
inference dataset: type: xnli name: XNLI (es) config: es split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 55.1 - task: type: Natural language inference dataset: type: xnli name: XNLI (fr) config: fr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 55.26 - task: type: Natural language inference dataset: type: xnli name: XNLI (hi) config: hi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.88 - task: type: Natural language inference dataset: type: xnli name: XNLI (ru) config: ru split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 47.75 - task: type: Natural language inference dataset: type: xnli name: XNLI (sw) config: sw split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 46.63 - task: type: Natural language inference dataset: type: xnli name: XNLI (th) config: th split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 40.12 - task: type: Natural language inference dataset: type: xnli name: XNLI (tr) config: tr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 37.55 - task: type: Natural language inference dataset: type: xnli name: XNLI (ur) config: ur split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 46.51 - task: type: Natural language inference dataset: type: xnli name: XNLI (vi) config: vi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 52.93 - task: type: Natural language inference dataset: type: xnli name: XNLI (zh) config: zh split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 53.61 - task: type: Program synthesis dataset: type: openai_humaneval name: HumanEval config: None split: test revision: e8dc562f5de170c54b5481011dd9f4fa04845771 metrics: - type: Pass@1 value: 8.06 - type: Pass@10 value: 15.03 - type: Pass@100 value: 27.49 - task: type: Sentence completion dataset: type: story_cloze name: StoryCloze (2016) config: "2016" split: validation revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db metrics: - type: Accuracy value: 90.43 - task: type: Sentence completion dataset: type: super_glue name: SuperGLUE (copa) config: copa split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 86.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (et) config: et split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 50.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ht) config: ht split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 54.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (id) config: id split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 76.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (it) config: it split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 61.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (qu) config: qu split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 60.0 - task: type: 
Sentence completion dataset: type: xcopa name: XCOPA (sw) config: sw split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 63.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ta) config: ta split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 64.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (th) config: th split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 57.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (tr) config: tr split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 53.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (vi) config: vi split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 79.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (zh) config: zh split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 81.0 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ar) config: ar split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 83.26 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (es) config: es split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 88.95 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (eu) config: eu split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 73.33 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (hi) config: hi split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 80.61 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (id) config: id split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 84.25 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (my) config: my split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 52.55 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ru) config: ru split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 65.32 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (sw) config: sw split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 71.67 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (te) config: te split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 74.72 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (zh) config: zh split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 85.37 --- ![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true) # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Limitations](#limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 7. 
[Citation](#citation) # Model Summary > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages. - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf) - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:[email protected]) - **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages. - **BLOOMZ & mT0 Model Family:** <div class="max-w-full overflow-auto"> <table> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.</th> </tr> <tr> <td>Parameters</td> <td>300M</td> <td>580M</td> <td>1.2B</td> <td>3.7B</td> <td>13B</td> <td>560M</td> <td>1.1B</td> <td>1.7B</td> <td>3B</td> <td>7.1B</td> <td>176B</td> </tr> <tr> <td>Finetuned Model</td> <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td> <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td> <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td> </tr> <tr> <th colspan="12">Original pretrained checkpoints. 
Not recommended.</th> </tr> <tr> <td>Pretrained Model</td> <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td> <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td> <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td> <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td> <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td> </tr> </table> </div> # Use ## Intended use We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper: - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? - Suggest at least five related search terms to "Mạng neural nhân tạo". - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): - Explain in a sentence in Telugu what is backpropagation in neural networks. **Feel free to share your generations in the Community tab!** ## How to use ### CPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz-7b1" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz-7b1" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto") inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU in 8bit <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz-7b1" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> <!-- Necessary for whitespace --> ### # Limitations **Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) 
at the end, may result in the model trying to continue the French sentence. Better prompts are, e.g., "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*", or "*What is "Je t'aime." in English?*", so that it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g., "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". # Training ## Model - **Architecture:** Same as [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1), also refer to the `config.json` file - **Finetuning steps:** 1000 - **Finetuning tokens:** 4.19 billion - **Finetuning layout:** 1x pipeline parallel, 1x tensor parallel, 64x data parallel - **Precision:** float16 ## Hardware - **CPUs:** AMD CPUs with 512GB memory per node - **GPUs:** 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links - **Communication:** NCCL-communications network with a fully dedicated subnet ## Software - **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed) - **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5) - **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex) # Evaluation We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config. # Citation ```bibtex @article{muennighoff2022crosslingual, title={Crosslingual generalization through multitask finetuning}, author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others}, journal={arXiv preprint arXiv:2211.01786}, year={2022} } ```
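## 4-bit loading (sketch)

This repository packages a 4-bit quantization of bloomz-7b1, while the snippets above cover only fp16 and 8-bit loading. A minimal sketch for the 4-bit case follows; it assumes the checkpoint ships serialized bitsandbytes 4-bit weights that `from_pretrained` applies automatically, and it has not been validated against these exact files.

```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "RichardErkhov/bigscience_-_bloomz-7b1-4bits"  # this repository

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# For a checkpoint that already ships 4-bit (bitsandbytes) weights, the
# serialized quantization config should be picked up automatically at load time.
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

Prompting behavior matches the full-precision model; only memory use and, potentially, output quality differ.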
{}
RichardErkhov/bigscience_-_bloomz-7b1-4bits
null
[ "transformers", "safetensors", "bloom", "text-generation", "arxiv:2211.01786", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T06:32:23+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - embracellm/sushi_LoRA <Gallery /> ## Model description These are embracellm/sushi_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sushi` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/embracellm/sushi_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (an unofficial sketch is provided below this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
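#### Usage sketch

Until the TODO snippet above is completed by the author, here is a minimal, unvalidated sketch using the standard `diffusers` LoRA-loading API and the trigger phrase from this card; the prompt wording beyond `a photo of sushi` is an arbitrary example.

```python
# pip install -q diffusers transformers accelerate safetensors
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model these LoRA weights were trained against.
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA adaptation weights from this repository.
pipeline.load_lora_weights("embracellm/sushi_LoRA")

# The instance prompt "a photo of sushi" acts as the trigger phrase.
image = pipeline("a photo of sushi on a wooden board", num_inference_steps=25).images[0]
image.save("sushi.png")
```

Since training used the madebyollin/sdxl-vae-fp16-fix VAE, swapping in that VAE for fp16 inference may also be advisable.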
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of sushi", "widget": []}
embracellm/sushi_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-27T06:32:28+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.6245 - F1 Score: 0.7798 - Accuracy: 0.7798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5281 | 5.13 | 200 | 0.4844 | 0.7875 | 0.7879 | | 0.4329 | 10.26 | 400 | 0.4901 | 0.7693 | 0.7700 | | 0.3869 | 15.38 | 600 | 0.4822 | 0.7960 | 0.7961 | | 0.3489 | 20.51 | 800 | 0.4849 | 0.7995 | 0.7993 | | 0.3155 | 25.64 | 1000 | 0.5261 | 0.8043 | 0.8042 | | 0.2837 | 30.77 | 1200 | 0.5394 | 0.8027 | 0.8026 | | 0.2574 | 35.9 | 1400 | 0.5679 | 0.8026 | 0.8026 | | 0.229 | 41.03 | 1600 | 0.5776 | 0.8092 | 0.8091 | | 0.2094 | 46.15 | 1800 | 0.5861 | 0.7928 | 0.7928 | | 0.1835 | 51.28 | 2000 | 0.6079 | 0.8092 | 0.8091 | | 0.1678 | 56.41 | 2200 | 0.6691 | 0.8011 | 0.8010 | | 0.1497 | 61.54 | 2400 | 0.7839 | 0.7742 | 0.7749 | | 0.1367 | 66.67 | 2600 | 0.7662 | 0.7962 | 0.7961 | | 0.1267 | 71.79 | 2800 | 0.7840 | 0.7832 | 0.7830 | | 0.121 | 76.92 | 3000 | 0.8157 | 0.7880 | 0.7879 | | 0.1092 | 82.05 | 3200 | 0.8645 | 0.7864 | 0.7863 | | 0.1085 | 87.18 | 3400 | 0.7989 | 0.7962 | 0.7961 | | 0.0993 | 92.31 | 3600 | 0.8623 | 0.8024 | 0.8026 | | 0.0921 | 97.44 | 3800 | 0.8916 | 0.7895 | 0.7896 | | 0.0861 | 102.56 | 4000 | 0.9362 | 0.7897 | 0.7896 | | 0.0837 | 107.69 | 4200 | 0.9484 | 0.7910 | 0.7912 | | 0.0773 | 112.82 | 4400 | 0.9369 | 0.8011 | 0.8010 | | 0.0721 | 117.95 | 4600 | 0.9656 | 0.7995 | 0.7993 | | 0.0721 | 123.08 | 4800 | 1.0188 | 0.7944 | 0.7945 | | 0.0675 | 128.21 | 5000 | 0.9916 | 0.7978 | 0.7977 | | 0.0659 | 133.33 | 5200 | 0.9771 | 0.8060 | 0.8059 | | 0.0602 | 138.46 | 5400 | 1.0305 | 0.7863 | 0.7863 | | 0.0589 | 143.59 | 5600 | 1.0362 | 0.7979 | 0.7977 | | 0.0583 | 148.72 | 5800 | 1.0196 | 0.7994 | 0.7993 | | 0.055 | 153.85 | 6000 | 1.0837 | 0.8011 | 0.8010 | | 0.0537 | 158.97 | 6200 | 1.1688 | 0.7977 | 0.7977 | | 0.0561 | 164.1 | 6400 | 1.0659 | 0.8060 | 0.8059 | | 0.0508 | 169.23 | 6600 | 1.1277 | 0.7959 | 0.7961 | | 0.05 | 174.36 | 6800 | 1.0920 | 0.7913 | 0.7912 | | 0.0493 | 179.49 | 7000 | 1.0955 | 0.8044 | 0.8042 | | 0.0482 | 184.62 | 7200 | 1.1218 | 0.7978 | 0.7977 | | 0.0462 | 189.74 | 7400 | 1.1239 | 0.7930 | 0.7928 | | 0.0446 | 194.87 | 7600 | 1.1725 | 0.7962 | 0.7961 | | 0.041 | 200.0 | 7800 | 1.2086 | 0.7992 | 0.7993 | | 0.0435 | 205.13 | 8000 | 1.1534 | 0.7962 | 0.7961 | | 0.0435 | 210.26 | 8200 | 1.1784 | 0.8043 | 0.8042 | | 0.0423 | 215.38 | 8400 | 1.1516 | 0.7962 | 0.7961 | | 0.0386 | 220.51 | 8600 | 1.1916 | 0.7929 | 0.7928 | | 0.0407 
| 225.64 | 8800 | 1.1814 | 0.7995 | 0.7993 | | 0.0411 | 230.77 | 9000 | 1.1773 | 0.8011 | 0.8010 | | 0.0406 | 235.9 | 9200 | 1.1888 | 0.8011 | 0.8010 | | 0.0369 | 241.03 | 9400 | 1.1865 | 0.8060 | 0.8059 | | 0.0372 | 246.15 | 9600 | 1.1899 | 0.8011 | 0.8010 | | 0.0366 | 251.28 | 9800 | 1.1979 | 0.7995 | 0.7993 | | 0.0375 | 256.41 | 10000 | 1.2061 | 0.7995 | 0.7993 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
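### Loading the adapter (sketch)

The card documents training only; a minimal sketch for loading the adapter on top of its base model follows. The use of `AutoModelForSequenceClassification` with two labels is an assumption (the promoter-detection task is binary and the card reports F1/accuracy); the actual head class is not documented here, and the base model may need extra arguments such as `trust_remote_code=True`.

```python
# pip install -q peft transformers
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_22M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)

# Wrap the base model with the finetuned PEFT adapter weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```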
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:32:36+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
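## Getting-started sketch

The "How to Get Started with the Model" section above is still a placeholder. Given the repository tags (a `llama` checkpoint for text generation), a generic starting point might look like the sketch below; the model class, prompt, and generation settings are assumptions, not documentation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "swj0419/bbc_retrain_new_STEP0000150"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Arbitrary example prompt; the intended input format is undocumented.
inputs = tokenizer("The BBC reported that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```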
{"library_name": "transformers", "tags": []}
swj0419/bbc_retrain_new_STEP0000150
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:33:57+00:00
text-generation
transformers
{}
Konthee/HoogBERTa-Decoder
null
[ "transformers", "pytorch", "roberta", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:33:59+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
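## Getting-started sketch

The "How to Get Started with the Model" section above is still a placeholder. The repository tags indicate a `mistral` checkpoint intended for conversational text generation, so a chat-style sketch is shown below; it assumes the tokenizer ships a chat template, which this card does not confirm.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "PIXMELT/Qwarte7B-v0.1-dev3-merged"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

messages = [{"role": "user", "content": "Summarize what a LoRA adapter is in one sentence."}]
# apply_chat_template requires a chat template in the tokenizer config (assumed here).
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```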
{"library_name": "transformers", "tags": []}
PIXMELT/Qwarte7B-v0.1-dev3-merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T06:34:19+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_notata-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.1287 - F1 Score: 0.9512 - Accuracy: 0.9512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3741 | 0.6 | 200 | 0.1942 | 0.9212 | 0.9212 | | 0.2083 | 1.2 | 400 | 0.1616 | 0.9374 | 0.9374 | | 0.1855 | 1.81 | 600 | 0.1442 | 0.9442 | 0.9442 | | 0.1611 | 2.41 | 800 | 0.1339 | 0.9457 | 0.9457 | | 0.1556 | 3.01 | 1000 | 0.1369 | 0.9454 | 0.9454 | | 0.1454 | 3.61 | 1200 | 0.1297 | 0.9478 | 0.9478 | | 0.1474 | 4.22 | 1400 | 0.1292 | 0.9482 | 0.9482 | | 0.1403 | 4.82 | 1600 | 0.1205 | 0.9525 | 0.9525 | | 0.1363 | 5.42 | 1800 | 0.1262 | 0.9508 | 0.9508 | | 0.1328 | 6.02 | 2000 | 0.1309 | 0.9484 | 0.9484 | | 0.1359 | 6.63 | 2200 | 0.1201 | 0.9518 | 0.9518 | | 0.1316 | 7.23 | 2400 | 0.1174 | 0.9519 | 0.9520 | | 0.1265 | 7.83 | 2600 | 0.1174 | 0.9538 | 0.9538 | | 0.1325 | 8.43 | 2800 | 0.1160 | 0.9538 | 0.9538 | | 0.1287 | 9.04 | 3000 | 0.1138 | 0.9561 | 0.9561 | | 0.1264 | 9.64 | 3200 | 0.1295 | 0.9523 | 0.9523 | | 0.1275 | 10.24 | 3400 | 0.1133 | 0.9555 | 0.9555 | | 0.1265 | 10.84 | 3600 | 0.1142 | 0.9553 | 0.9553 | | 0.1232 | 11.45 | 3800 | 0.1166 | 0.9546 | 0.9546 | | 0.1235 | 12.05 | 4000 | 0.1148 | 0.9544 | 0.9544 | | 0.1242 | 12.65 | 4200 | 0.1169 | 0.9529 | 0.9529 | | 0.1244 | 13.25 | 4400 | 0.1161 | 0.9555 | 0.9555 | | 0.1219 | 13.86 | 4600 | 0.1144 | 0.9542 | 0.9542 | | 0.1231 | 14.46 | 4800 | 0.1146 | 0.9561 | 0.9561 | | 0.1196 | 15.06 | 5000 | 0.1142 | 0.9557 | 0.9557 | | 0.1197 | 15.66 | 5200 | 0.1144 | 0.9561 | 0.9561 | | 0.1212 | 16.27 | 5400 | 0.1137 | 0.9559 | 0.9559 | | 0.1172 | 16.87 | 5600 | 0.1140 | 0.9561 | 0.9561 | | 0.1172 | 17.47 | 5800 | 0.1099 | 0.9567 | 0.9567 | | 0.1221 | 18.07 | 6000 | 0.1106 | 0.9553 | 0.9553 | | 0.1191 | 18.67 | 6200 | 0.1146 | 0.9555 | 0.9555 | | 0.1198 | 19.28 | 6400 | 0.1131 | 0.9561 | 0.9561 | | 0.1167 | 19.88 | 6600 | 0.1117 | 0.9570 | 0.9570 | | 0.1224 | 20.48 | 6800 | 0.1105 | 0.9576 | 0.9576 | | 0.1127 | 21.08 | 7000 | 0.1139 | 0.9561 | 0.9561 | | 0.1165 | 21.69 | 7200 | 0.1134 | 0.9550 | 0.9550 | | 0.1156 | 22.29 | 7400 | 0.1157 | 0.9544 | 0.9544 | | 0.1208 | 22.89 | 7600 | 0.1098 | 0.9563 | 0.9563 | | 0.1155 | 23.49 | 7800 | 0.1112 | 0.9567 | 0.9567 | | 0.1153 | 24.1 | 8000 | 0.1117 | 0.9567 | 0.9567 | | 0.1164 | 24.7 | 8200 | 0.1130 | 0.9567 | 0.9567 | | 0.117 | 25.3 | 8400 | 0.1115 | 0.9563 | 0.9563 | | 0.1149 | 25.9 | 8600 | 0.1107 | 0.9559 | 0.9559 | | 0.1163 | 26.51 | 8800 | 0.1107 | 0.9568 
| 0.9568 | | 0.1155 | 27.11 | 9000 | 0.1109 | 0.9570 | 0.9570 | | 0.1152 | 27.71 | 9200 | 0.1108 | 0.9567 | 0.9567 | | 0.1142 | 28.31 | 9400 | 0.1098 | 0.9567 | 0.9567 | | 0.1192 | 28.92 | 9600 | 0.1112 | 0.9567 | 0.9567 | | 0.1124 | 29.52 | 9800 | 0.1106 | 0.9567 | 0.9567 | | 0.1154 | 30.12 | 10000 | 0.1108 | 0.9567 | 0.9567 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
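### Reproducing the configuration (sketch)

For convenience, the hyperparameters listed above map onto a Hugging Face `TrainingArguments` object roughly as follows. This is a reconstruction from the card, not the authors' actual training script; dataset preparation, the PEFT wrapping, and the `Trainer` call are omitted.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in this card.
training_args = TrainingArguments(
    output_dir="GUE_prom_prom_300_notata-seqsight_16384_512_22M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```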
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:35:04+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/yxng8im
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:35:25+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/55p1wba
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:35:25+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/9cd8j0p
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:35:25+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/es5km0l
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:35:25+00:00
null
null
{}
pruning/2ft1i1x
null
[ "region:us" ]
null
2024-04-27T06:35:25+00:00
null
null
{}
pruning/ugx03qg
null
[ "region:us" ]
null
2024-04-27T06:35:25+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
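## Getting-started sketch

The "How to Get Started with the Model" section above is still a placeholder. Given the repository tags (an `mt5` checkpoint for text2text generation) and the repository name, a question-answering sketch is shown below; the model class is standard for mT5, but the expected input format is a guess.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "NegarSH/mt5-Quran-QA"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# "question: ... context: ..." is a common T5-style QA convention and only a
# guess here; the card does not document the finetuning input format.
text = "question: <your question> context: <supporting passage>"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```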
{"library_name": "transformers", "tags": []}
NegarSH/mt5-Quran-QA
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:36:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_notata-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.1210 - F1 Score: 0.9552 - Accuracy: 0.9552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.306 | 0.6 | 200 | 0.1533 | 0.9397 | 0.9397 | | 0.1652 | 1.2 | 400 | 0.1343 | 0.9478 | 0.9478 | | 0.15 | 1.81 | 600 | 0.1212 | 0.9516 | 0.9516 | | 0.1345 | 2.41 | 800 | 0.1172 | 0.9529 | 0.9529 | | 0.1338 | 3.01 | 1000 | 0.1208 | 0.9529 | 0.9529 | | 0.1265 | 3.61 | 1200 | 0.1133 | 0.9546 | 0.9546 | | 0.129 | 4.22 | 1400 | 0.1146 | 0.9555 | 0.9555 | | 0.1249 | 4.82 | 1600 | 0.1114 | 0.9555 | 0.9555 | | 0.1219 | 5.42 | 1800 | 0.1151 | 0.9559 | 0.9559 | | 0.118 | 6.02 | 2000 | 0.1146 | 0.9557 | 0.9557 | | 0.1221 | 6.63 | 2200 | 0.1112 | 0.9576 | 0.9576 | | 0.1184 | 7.23 | 2400 | 0.1087 | 0.9593 | 0.9593 | | 0.1118 | 7.83 | 2600 | 0.1095 | 0.9578 | 0.9578 | | 0.1187 | 8.43 | 2800 | 0.1111 | 0.9593 | 0.9593 | | 0.1146 | 9.04 | 3000 | 0.1064 | 0.9593 | 0.9593 | | 0.1128 | 9.64 | 3200 | 0.1305 | 0.9512 | 0.9512 | | 0.1134 | 10.24 | 3400 | 0.1059 | 0.9602 | 0.9602 | | 0.1123 | 10.84 | 3600 | 0.1118 | 0.9563 | 0.9563 | | 0.1083 | 11.45 | 3800 | 0.1091 | 0.9578 | 0.9578 | | 0.109 | 12.05 | 4000 | 0.1098 | 0.9578 | 0.9578 | | 0.1084 | 12.65 | 4200 | 0.1076 | 0.9585 | 0.9585 | | 0.1103 | 13.25 | 4400 | 0.1103 | 0.9589 | 0.9589 | | 0.1059 | 13.86 | 4600 | 0.1068 | 0.9587 | 0.9587 | | 0.1077 | 14.46 | 4800 | 0.1097 | 0.9593 | 0.9593 | | 0.1037 | 15.06 | 5000 | 0.1100 | 0.9585 | 0.9585 | | 0.1042 | 15.66 | 5200 | 0.1055 | 0.9595 | 0.9595 | | 0.104 | 16.27 | 5400 | 0.1063 | 0.9602 | 0.9602 | | 0.1005 | 16.87 | 5600 | 0.1089 | 0.9601 | 0.9601 | | 0.1016 | 17.47 | 5800 | 0.1030 | 0.9599 | 0.9599 | | 0.1043 | 18.07 | 6000 | 0.1030 | 0.9599 | 0.9599 | | 0.1007 | 18.67 | 6200 | 0.1048 | 0.9593 | 0.9593 | | 0.1035 | 19.28 | 6400 | 0.1078 | 0.9585 | 0.9585 | | 0.0993 | 19.88 | 6600 | 0.1056 | 0.9593 | 0.9593 | | 0.1024 | 20.48 | 6800 | 0.1044 | 0.9610 | 0.9610 | | 0.0957 | 21.08 | 7000 | 0.1084 | 0.9601 | 0.9601 | | 0.0998 | 21.69 | 7200 | 0.1074 | 0.9599 | 0.9599 | | 0.0984 | 22.29 | 7400 | 0.1081 | 0.9595 | 0.9595 | | 0.102 | 22.89 | 7600 | 0.1030 | 0.9602 | 0.9602 | | 0.0981 | 23.49 | 7800 | 0.1085 | 0.9601 | 0.9601 | | 0.0969 | 24.1 | 8000 | 0.1047 | 0.9593 | 0.9593 | | 0.0976 | 24.7 | 8200 | 0.1051 | 0.9602 | 0.9602 | | 0.0983 | 25.3 | 8400 | 0.1041 | 0.9599 | 0.9599 | | 0.0957 | 25.9 | 8600 | 0.1044 | 0.9612 | 0.9612 | | 0.0979 | 26.51 | 8800 | 0.1041 | 0.9601 | 0.9601 | | 0.0963 | 27.11 | 9000 | 0.1037 | 0.9599 | 0.9599 | | 0.0964 | 27.71 | 9200 | 0.1049 | 0.9601 | 0.9601 | | 0.0951 | 28.31 | 9400 | 0.1037 | 0.9604 | 0.9604 | | 0.0992 | 28.92 | 9600 | 0.1050 | 0.9604 | 0.9604 | | 0.0934 | 29.52 | 9800 | 0.1045 | 0.9604 | 0.9604 | | 0.0961 | 30.12 | 10000 | 0.1045 | 0.9601 | 0.9601 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
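A minimal inference sketch for a PEFT adapter checkpoint like the one above. It assumes the base checkpoint loads through the standard 🤗 Auto classes with a binary classification head and that the adapter repo id matches this card; none of that is confirmed by the card itself, so treat this as a starting point rather than the authors' code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_22M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)  # assumes the base repo ships a tokenizer
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # assumed binary head
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter weights
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # illustrative DNA input
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```

The same pattern should apply to the sibling `-L1_f` and `-L32_f` adapters in this series, which differ only in adapter capacity.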
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:38:24+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_notata-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.1218 - F1 Score: 0.9557 - Accuracy: 0.9557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.2647 | 0.6 | 200 | 0.1403 | 0.9465 | 0.9465 | | 0.1464 | 1.2 | 400 | 0.1229 | 0.9510 | 0.9510 | | 0.1385 | 1.81 | 600 | 0.1137 | 0.9550 | 0.9550 | | 0.1267 | 2.41 | 800 | 0.1125 | 0.9546 | 0.9546 | | 0.1286 | 3.01 | 1000 | 0.1146 | 0.9550 | 0.9550 | | 0.1195 | 3.61 | 1200 | 0.1083 | 0.9589 | 0.9589 | | 0.1216 | 4.22 | 1400 | 0.1068 | 0.9574 | 0.9574 | | 0.1172 | 4.82 | 1600 | 0.1088 | 0.9576 | 0.9576 | | 0.1128 | 5.42 | 1800 | 0.1111 | 0.9578 | 0.9578 | | 0.1093 | 6.02 | 2000 | 0.1073 | 0.9597 | 0.9597 | | 0.1131 | 6.63 | 2200 | 0.1053 | 0.9599 | 0.9599 | | 0.1084 | 7.23 | 2400 | 0.1029 | 0.9608 | 0.9608 | | 0.1012 | 7.83 | 2600 | 0.1030 | 0.9610 | 0.9610 | | 0.1071 | 8.43 | 2800 | 0.1107 | 0.9587 | 0.9587 | | 0.1047 | 9.04 | 3000 | 0.1015 | 0.9614 | 0.9614 | | 0.1013 | 9.64 | 3200 | 0.1216 | 0.9548 | 0.9548 | | 0.0999 | 10.24 | 3400 | 0.1022 | 0.9595 | 0.9595 | | 0.1004 | 10.84 | 3600 | 0.1015 | 0.9602 | 0.9602 | | 0.0952 | 11.45 | 3800 | 0.1043 | 0.9608 | 0.9608 | | 0.0954 | 12.05 | 4000 | 0.1022 | 0.9604 | 0.9604 | | 0.0943 | 12.65 | 4200 | 0.1007 | 0.9629 | 0.9629 | | 0.0959 | 13.25 | 4400 | 0.1137 | 0.9585 | 0.9585 | | 0.0925 | 13.86 | 4600 | 0.1020 | 0.9606 | 0.9606 | | 0.093 | 14.46 | 4800 | 0.1067 | 0.9612 | 0.9612 | | 0.0901 | 15.06 | 5000 | 0.1043 | 0.9604 | 0.9604 | | 0.0874 | 15.66 | 5200 | 0.1017 | 0.9621 | 0.9621 | | 0.0879 | 16.27 | 5400 | 0.1044 | 0.9604 | 0.9604 | | 0.084 | 16.87 | 5600 | 0.1114 | 0.9582 | 0.9582 | | 0.0852 | 17.47 | 5800 | 0.1034 | 0.9599 | 0.9599 | | 0.0873 | 18.07 | 6000 | 0.1013 | 0.9614 | 0.9614 | | 0.0834 | 18.67 | 6200 | 0.1017 | 0.9612 | 0.9612 | | 0.0853 | 19.28 | 6400 | 0.1099 | 0.9580 | 0.9580 | | 0.0829 | 19.88 | 6600 | 0.1023 | 0.9636 | 0.9636 | | 0.0833 | 20.48 | 6800 | 0.1046 | 0.9606 | 0.9606 | | 0.0773 | 21.08 | 7000 | 0.1073 | 0.9597 | 0.9597 | | 0.0816 | 21.69 | 7200 | 0.1070 | 0.9584 | 0.9584 | | 0.0804 | 22.29 | 7400 | 0.1096 | 0.9582 | 0.9582 | | 0.0819 | 22.89 | 7600 | 0.1040 | 0.9595 | 0.9595 | | 0.078 | 23.49 | 7800 | 0.1102 | 0.9597 | 0.9597 | | 0.0755 | 24.1 | 8000 | 0.1048 | 0.9608 | 0.9608 | | 0.0777 | 24.7 | 8200 | 0.1072 | 0.9597 | 0.9597 | | 0.0777 | 25.3 | 8400 | 0.1028 | 0.9606 | 0.9606 | | 0.0749 | 25.9 | 8600 | 0.1052 | 0.9610 | 0.9610 | | 0.0772 | 26.51 | 8800 | 0.1042 | 0.9604 | 0.9604 | | 0.0752 | 27.11 | 9000 | 0.1054 | 0.9604 | 0.9604 | | 0.0751 | 27.71 | 9200 | 0.1083 | 0.9597 | 0.9597 | | 0.0741 | 28.31 | 9400 | 0.1055 | 0.9587 | 0.9587 | | 0.0783 | 28.92 | 9600 | 0.1082 | 0.9597 | 0.9597 | | 0.0721 | 29.52 | 9800 | 0.1080 | 0.9587 | 0.9587 | | 0.0742 | 30.12 | 10000 | 0.1066 | 0.9593 | 0.9593 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:38:42+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
liquid9212/x2h2lbi
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:39:28+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/e6372s9
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:39:32+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/9dgq20g
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:39:58+00:00
text2text-generation
transformers
{}
lingvenvist/mtwsd-small
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:40:12+00:00
text-classification
transformers
{}
mdRKK/RedBlueModelWrapper_Team9
null
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:41:20+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eli5_dir This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 3.5847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6947 | 1.0 | 1308 | 3.5892 | | 3.5793 | 2.0 | 2616 | 3.5833 | | 3.5287 | 3.0 | 3924 | 3.5847 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
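For orientation, a short generation sketch with the checkpoint above via the standard transformers pipeline API; the prompt and decoding settings are illustrative and not taken from the card.

```python
from transformers import pipeline

# Load the GPT-2 checkpoint fine-tuned on eli5_category (repo id from this record).
generator = pipeline("text-generation", model="BohanJiang0128/eli5_dir")

out = generator("Why is the sky blue?", max_new_tokens=50)  # illustrative settings
print(out[0]["generated_text"])
```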
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "eli5_dir", "results": []}]}
BohanJiang0128/eli5_dir
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:eli5_category", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:41:48+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
swj0419/bbc_retrain_new_STEP0000200
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:42:32+00:00
null
null
{}
thusinh1969/LLaMA-2-finetune-EP2-DPO-25APRIL2024-Q4_K_M.gguf
null
[ "gguf", "region:us" ]
null
2024-04-27T06:43:27+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Audino/my-awesome-modelv4-large
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:43:40+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
uh1216/science-textbook-Llama3-8b-Instruct-10epoch
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:43:49+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_all-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.4266 - F1 Score: 0.8008 - Accuracy: 0.8008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6011 | 0.54 | 200 | 0.5326 | 0.7295 | 0.7306 | | 0.532 | 1.08 | 400 | 0.4974 | 0.7648 | 0.7649 | | 0.5019 | 1.62 | 600 | 0.4776 | 0.7767 | 0.7772 | | 0.4809 | 2.16 | 800 | 0.4712 | 0.7804 | 0.7804 | | 0.4772 | 2.7 | 1000 | 0.4616 | 0.7859 | 0.7860 | | 0.469 | 3.24 | 1200 | 0.4596 | 0.7848 | 0.7848 | | 0.4633 | 3.78 | 1400 | 0.4576 | 0.7879 | 0.7880 | | 0.4536 | 4.32 | 1600 | 0.4634 | 0.7847 | 0.7850 | | 0.4557 | 4.86 | 1800 | 0.4565 | 0.7900 | 0.7902 | | 0.4529 | 5.41 | 2000 | 0.4567 | 0.7882 | 0.7883 | | 0.4481 | 5.95 | 2200 | 0.4560 | 0.7887 | 0.7887 | | 0.4489 | 6.49 | 2400 | 0.4533 | 0.7909 | 0.7910 | | 0.4459 | 7.03 | 2600 | 0.4501 | 0.7948 | 0.7948 | | 0.4464 | 7.57 | 2800 | 0.4559 | 0.7900 | 0.7902 | | 0.4387 | 8.11 | 3000 | 0.4543 | 0.7881 | 0.7885 | | 0.4407 | 8.65 | 3200 | 0.4469 | 0.7930 | 0.7931 | | 0.4426 | 9.19 | 3400 | 0.4500 | 0.7914 | 0.7916 | | 0.4389 | 9.73 | 3600 | 0.4554 | 0.7888 | 0.7895 | | 0.4423 | 10.27 | 3800 | 0.4492 | 0.7901 | 0.7904 | | 0.4386 | 10.81 | 4000 | 0.4468 | 0.7958 | 0.7958 | | 0.4383 | 11.35 | 4200 | 0.4490 | 0.7906 | 0.7909 | | 0.4352 | 11.89 | 4400 | 0.4487 | 0.7908 | 0.7912 | | 0.4361 | 12.43 | 4600 | 0.4434 | 0.7952 | 0.7953 | | 0.4325 | 12.97 | 4800 | 0.4480 | 0.7898 | 0.7904 | | 0.4349 | 13.51 | 5000 | 0.4555 | 0.7857 | 0.7870 | | 0.4338 | 14.05 | 5200 | 0.4417 | 0.7952 | 0.7953 | | 0.4314 | 14.59 | 5400 | 0.4436 | 0.7956 | 0.7956 | | 0.4315 | 15.14 | 5600 | 0.4405 | 0.7986 | 0.7986 | | 0.4361 | 15.68 | 5800 | 0.4447 | 0.7916 | 0.7919 | | 0.4261 | 16.22 | 6000 | 0.4475 | 0.7922 | 0.7927 | | 0.4335 | 16.76 | 6200 | 0.4419 | 0.7915 | 0.7919 | | 0.4343 | 17.3 | 6400 | 0.4423 | 0.7937 | 0.7941 | | 0.429 | 17.84 | 6600 | 0.4469 | 0.7918 | 0.7924 | | 0.4319 | 18.38 | 6800 | 0.4481 | 0.7936 | 0.7944 | | 0.4273 | 18.92 | 7000 | 0.4429 | 0.7914 | 0.7919 | | 0.4227 | 19.46 | 7200 | 0.4451 | 0.7938 | 0.7943 | | 0.4337 | 20.0 | 7400 | 0.4431 | 0.7927 | 0.7931 | | 0.4286 | 20.54 | 7600 | 0.4453 | 0.7927 | 0.7932 | | 0.4259 | 21.08 | 7800 | 0.4464 | 0.7939 | 0.7944 | | 0.4286 | 21.62 | 8000 | 0.4411 | 0.7921 | 0.7924 | | 0.4283 | 22.16 | 8200 | 0.4410 | 0.7942 | 0.7944 | | 0.4308 | 22.7 | 8400 | 0.4437 | 0.7932 | 0.7937 | | 0.425 | 23.24 | 8600 | 0.4410 | 0.7937 | 0.7939 | | 0.4231 | 23.78 | 8800 | 0.4434 | 0.7918 | 0.7922 | | 0.424 | 24.32 | 9000 | 0.4418 | 0.7943 | 0.7946 | | 0.4266 | 24.86 | 9200 | 0.4410 | 0.7936 | 0.7939 | | 0.4332 | 25.41 | 9400 | 0.4419 | 0.7927 | 0.7931 | | 0.4202 | 25.95 | 9600 | 0.4415 | 0.7940 | 0.7943 | | 0.4293 | 26.49 | 9800 | 0.4430 | 0.7935 | 0.7939 | | 0.4245 | 27.03 | 10000 | 0.4423 | 0.7939 | 0.7943 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
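The hyperparameter list above maps directly onto transformers `TrainingArguments`; the sketch below is a reconstruction under that assumption, not the authors' training script, and the output directory name is hypothetical.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="seqsight-gue-finetune",   # hypothetical path
    learning_rate=5e-4,                   # 0.0005, as listed in the card
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                     # training_steps: 10000
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults,
    # matching the optimizer line in the card.
)
```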
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:44:13+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
dbaek111/Llama-3-8B-Instruct-Elon_407_HPC_Q_v2
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T06:44:59+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_core_all-seqsight_16384_512_22M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4168
- F1 Score: 0.8080
- Accuracy: 0.8081

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5682 | 0.54 | 200 | 0.4933 | 0.7655 | 0.7655 |
| 0.4898 | 1.08 | 400 | 0.4768 | 0.7774 | 0.7775 |
| 0.4635 | 1.62 | 600 | 0.4562 | 0.7841 | 0.7841 |
| 0.4526 | 2.16 | 800 | 0.4611 | 0.7872 | 0.7877 |
| 0.4486 | 2.7 | 1000 | 0.4516 | 0.7871 | 0.7875 |
| 0.4425 | 3.24 | 1200 | 0.4464 | 0.7883 | 0.7887 |
| 0.4377 | 3.78 | 1400 | 0.4489 | 0.7851 | 0.7861 |
| 0.4291 | 4.32 | 1600 | 0.4466 | 0.7933 | 0.7941 |
| 0.4328 | 4.86 | 1800 | 0.4424 | 0.7967 | 0.7973 |
| 0.4298 | 5.41 | 2000 | 0.4428 | 0.7951 | 0.7956 |
| 0.4262 | 5.95 | 2200 | 0.4387 | 0.7994 | 0.7995 |
| 0.4271 | 6.49 | 2400 | 0.4354 | 0.8006 | 0.8007 |
| 0.424 | 7.03 | 2600 | 0.4349 | 0.8002 | 0.8002 |
| 0.4231 | 7.57 | 2800 | 0.4398 | 0.8048 | 0.8049 |
| 0.4174 | 8.11 | 3000 | 0.4370 | 0.7989 | 0.7993 |
| 0.4187 | 8.65 | 3200 | 0.4309 | 0.8070 | 0.8071 |
| 0.4204 | 9.19 | 3400 | 0.4335 | 0.8064 | 0.8064 |
| 0.4174 | 9.73 | 3600 | 0.4410 | 0.7985 | 0.7993 |
| 0.4215 | 10.27 | 3800 | 0.4325 | 0.8036 | 0.8039 |
| 0.4168 | 10.81 | 4000 | 0.4336 | 0.8012 | 0.8012 |
| 0.4154 | 11.35 | 4200 | 0.4359 | 0.8031 | 0.8034 |
| 0.4142 | 11.89 | 4400 | 0.4361 | 0.8042 | 0.8047 |
| 0.4145 | 12.43 | 4600 | 0.4278 | 0.8052 | 0.8052 |
| 0.4103 | 12.97 | 4800 | 0.4325 | 0.8047 | 0.8049 |
| 0.4128 | 13.51 | 5000 | 0.4436 | 0.7954 | 0.7968 |
| 0.4104 | 14.05 | 5200 | 0.4292 | 0.8073 | 0.8074 |
| 0.4089 | 14.59 | 5400 | 0.4295 | 0.8082 | 0.8083 |
| 0.4079 | 15.14 | 5600 | 0.4281 | 0.8059 | 0.8059 |
| 0.4109 | 15.68 | 5800 | 0.4384 | 0.7980 | 0.7988 |
| 0.4045 | 16.22 | 6000 | 0.4330 | 0.8050 | 0.8054 |
| 0.411 | 16.76 | 6200 | 0.4271 | 0.8064 | 0.8068 |
| 0.4104 | 17.3 | 6400 | 0.4305 | 0.8063 | 0.8068 |
| 0.4063 | 17.84 | 6600 | 0.4334 | 0.8040 | 0.8044 |
| 0.4063 | 18.38 | 6800 | 0.4460 | 0.7960 | 0.7973 |
| 0.4048 | 18.92 | 7000 | 0.4307 | 0.8051 | 0.8056 |
| 0.3994 | 19.46 | 7200 | 0.4326 | 0.8057 | 0.8061 |
| 0.4093 | 20.0 | 7400 | 0.4282 | 0.8078 | 0.8079 |
| 0.4023 | 20.54 | 7600 | 0.4358 | 0.8045 | 0.8051 |
| 0.4006 | 21.08 | 7800 | 0.4323 | 0.8086 | 0.8088 |
| 0.4038 | 21.62 | 8000 | 0.4254 | 0.8097 | 0.8098 |
| 0.4024 | 22.16 | 8200 | 0.4285 | 0.8068 | 0.8069 |
| 0.4057 | 22.7 | 8400 | 0.4324 | 0.8045 | 0.8051 |
| 0.3992 | 23.24 | 8600 | 0.4272 | 0.8070 | 0.8071 |
| 0.3987 | 23.78 | 8800 | 0.4316 | 0.8058 | 0.8061 |
| 0.3977 | 24.32 | 9000 | 0.4295 | 0.8074 | 0.8076 |
| 0.4002 | 24.86 | 9200 | 0.4288 | 0.8086 | 0.8088 |
| 0.4068 | 25.41 | 9400 | 0.4290 | 0.8072 | 0.8074 |
| 0.3946 | 25.95 | 9600 | 0.4296 | 0.8079 | 0.8081 |
| 0.4019 | 26.49 | 9800 | 0.4311 | 0.8071 | 0.8074 |
| 0.3976 | 27.03 | 10000 | 0.4303 | 0.8073 | 0.8076 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
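As a usage aside, a minimal sketch of loading this PEFT adapter for inference. The card shows no inference code, so the head class, label count, and input below are assumptions; the custom base model may additionally require `trust_remote_code=True`.

```python
# Hypothetical usage sketch: attach the fine-tuned LoRA adapter to its base model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_22M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # binary task assumed
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights on top
model.eval()

# Score one DNA sequence (promoter classification is assumed from the dataset name).
inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```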
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:47:43+00:00
null
null
{}
adi1193/mistral-finetuned-postquest
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-27T06:47:48+00:00
reinforcement-learning
null
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
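Unit 4 of the course builds the policy-gradient update that this agent was trained with; a minimal, self-contained sketch of the REINFORCE loss it revolves around (illustrative only, not this repo's training script):

```python
# Minimal REINFORCE loss sketch: maximize expected return by weighting
# log-probabilities of taken actions with discounted returns-to-go.
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: list of log pi(a_t|s_t) tensors; rewards: list of floats."""
    returns, g = [], 0.0
    for r in reversed(rewards):           # discounted return-to-go
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    return -(torch.stack(log_probs) * returns).sum()               # gradient ascent via negation
```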
{"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-PixelCopter", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "62.00 +/- 46.45", "name": "mean_reward", "verified": false}]}]}]}
i-pj/Reinforce-PixelCopter
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-27T06:47:54+00:00
null
null
{"license": "mit"}
shadyAI/Tomato_disease_CLassification
null
[ "license:mit", "region:us" ]
null
2024-04-27T06:48:08+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/qkgglor
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:49:01+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/ddh98vx
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:49:03+00:00
text-generation
transformers
{}
delphi-suite/stories-llama2-50k
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:49:21+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_core_all-seqsight_16384_512_22M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4191
- F1 Score: 0.8132
- Accuracy: 0.8133

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5459 | 0.54 | 200 | 0.4720 | 0.7762 | 0.7764 |
| 0.4685 | 1.08 | 400 | 0.4790 | 0.7752 | 0.7767 |
| 0.4444 | 1.62 | 600 | 0.4477 | 0.7959 | 0.7959 |
| 0.4374 | 2.16 | 800 | 0.4510 | 0.7866 | 0.7875 |
| 0.4347 | 2.7 | 1000 | 0.4420 | 0.7908 | 0.7914 |
| 0.4312 | 3.24 | 1200 | 0.4366 | 0.7951 | 0.7954 |
| 0.4269 | 3.78 | 1400 | 0.4371 | 0.7934 | 0.7941 |
| 0.4179 | 4.32 | 1600 | 0.4366 | 0.7986 | 0.7992 |
| 0.4225 | 4.86 | 1800 | 0.4340 | 0.7973 | 0.7978 |
| 0.4166 | 5.41 | 2000 | 0.4440 | 0.7957 | 0.7965 |
| 0.4162 | 5.95 | 2200 | 0.4301 | 0.8047 | 0.8047 |
| 0.4158 | 6.49 | 2400 | 0.4292 | 0.7982 | 0.7983 |
| 0.4112 | 7.03 | 2600 | 0.4271 | 0.8045 | 0.8046 |
| 0.4096 | 7.57 | 2800 | 0.4318 | 0.8032 | 0.8032 |
| 0.4041 | 8.11 | 3000 | 0.4271 | 0.8012 | 0.8015 |
| 0.4038 | 8.65 | 3200 | 0.4271 | 0.8050 | 0.8052 |
| 0.4057 | 9.19 | 3400 | 0.4293 | 0.8083 | 0.8083 |
| 0.403 | 9.73 | 3600 | 0.4364 | 0.7988 | 0.7997 |
| 0.4049 | 10.27 | 3800 | 0.4315 | 0.8040 | 0.8044 |
| 0.4013 | 10.81 | 4000 | 0.4325 | 0.8017 | 0.8017 |
| 0.3995 | 11.35 | 4200 | 0.4289 | 0.8055 | 0.8057 |
| 0.3977 | 11.89 | 4400 | 0.4327 | 0.8010 | 0.8017 |
| 0.3969 | 12.43 | 4600 | 0.4250 | 0.8074 | 0.8074 |
| 0.394 | 12.97 | 4800 | 0.4282 | 0.8050 | 0.8051 |
| 0.3954 | 13.51 | 5000 | 0.4361 | 0.7981 | 0.7992 |
| 0.3913 | 14.05 | 5200 | 0.4247 | 0.8083 | 0.8084 |
| 0.389 | 14.59 | 5400 | 0.4294 | 0.8056 | 0.8057 |
| 0.3897 | 15.14 | 5600 | 0.4264 | 0.8079 | 0.8079 |
| 0.3898 | 15.68 | 5800 | 0.4400 | 0.7991 | 0.8002 |
| 0.3854 | 16.22 | 6000 | 0.4309 | 0.8036 | 0.8041 |
| 0.3905 | 16.76 | 6200 | 0.4220 | 0.8077 | 0.8081 |
| 0.3896 | 17.3 | 6400 | 0.4316 | 0.8066 | 0.8071 |
| 0.3867 | 17.84 | 6600 | 0.4337 | 0.8072 | 0.8076 |
| 0.3847 | 18.38 | 6800 | 0.4463 | 0.7982 | 0.7997 |
| 0.3837 | 18.92 | 7000 | 0.4292 | 0.8053 | 0.8057 |
| 0.3774 | 19.46 | 7200 | 0.4324 | 0.8035 | 0.8039 |
| 0.3885 | 20.0 | 7400 | 0.4264 | 0.8068 | 0.8069 |
| 0.3792 | 20.54 | 7600 | 0.4370 | 0.8023 | 0.8029 |
| 0.3774 | 21.08 | 7800 | 0.4333 | 0.8086 | 0.8088 |
| 0.3814 | 21.62 | 8000 | 0.4231 | 0.8075 | 0.8076 |
| 0.3777 | 22.16 | 8200 | 0.4280 | 0.8071 | 0.8073 |
| 0.3828 | 22.7 | 8400 | 0.4317 | 0.8038 | 0.8044 |
| 0.3749 | 23.24 | 8600 | 0.4259 | 0.8034 | 0.8035 |
| 0.3738 | 23.78 | 8800 | 0.4333 | 0.8059 | 0.8063 |
| 0.3758 | 24.32 | 9000 | 0.4281 | 0.8064 | 0.8066 |
| 0.376 | 24.86 | 9200 | 0.4278 | 0.8064 | 0.8066 |
| 0.3813 | 25.41 | 9400 | 0.4277 | 0.8065 | 0.8068 |
| 0.371 | 25.95 | 9600 | 0.4282 | 0.8072 | 0.8074 |
| 0.3776 | 26.49 | 9800 | 0.4299 | 0.8057 | 0.8061 |
| 0.3724 | 27.03 | 10000 | 0.4292 | 0.8061 | 0.8064 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
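The hyperparameter list above maps directly onto transformers' `TrainingArguments`; a sketch of that mapping, assuming the standard `Trainer` workflow (the output directory is a placeholder, and this is an illustration rather than the original training script):

```python
# Expressing the card's listed hyperparameters with transformers' TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                  # placeholder
    learning_rate=5e-4,                # 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                  # "training_steps: 10000"
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```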
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:50:22+00:00
text-generation
transformers
{}
delphi-suite/stories-mamba-50k
null
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:50:59+00:00
text-generation
transformers
# miqu-evil-dpo

# **Model Details**

## Description

miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.

It is trained with the evil-tune method applied.

![image/png](./eviltune.png)

<!-- prompt-template start -->
## Prompt template: Mistral Inst

```
<s> [INST] {inst} [/INST]
```

<!-- prompt-template end -->

## Disclaimer

The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
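Following the prompt template above, a small helper for building prompts programmatically (the tokenizer's chat template, if the repo ships one, may do this formatting automatically):

```python
# Build the documented Mistral-Inst prompt around a user instruction.
def build_prompt(inst: str) -> str:
    return f"<s> [INST] {inst} [/INST]"

print(build_prompt("Write a short story about a lighthouse."))
```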
{"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"}
blockblockblock/miqu-evil-dpo-bpw5-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "region:us" ]
null
2024-04-27T06:50:59+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_core_notata-seqsight_16384_512_22M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3842
- F1 Score: 0.8302
- Accuracy: 0.8302

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5613 | 0.6 | 200 | 0.4434 | 0.7974 | 0.7974 |
| 0.4545 | 1.2 | 400 | 0.4031 | 0.8179 | 0.8180 |
| 0.429 | 1.81 | 600 | 0.3886 | 0.8328 | 0.8329 |
| 0.416 | 2.41 | 800 | 0.3846 | 0.8285 | 0.8287 |
| 0.4048 | 3.01 | 1000 | 0.3807 | 0.8282 | 0.8283 |
| 0.4017 | 3.61 | 1200 | 0.3786 | 0.8327 | 0.8329 |
| 0.4027 | 4.22 | 1400 | 0.3787 | 0.8300 | 0.8300 |
| 0.4005 | 4.82 | 1600 | 0.3792 | 0.8292 | 0.8295 |
| 0.3899 | 5.42 | 1800 | 0.3771 | 0.8280 | 0.8280 |
| 0.394 | 6.02 | 2000 | 0.3774 | 0.8263 | 0.8266 |
| 0.3942 | 6.63 | 2200 | 0.3748 | 0.8345 | 0.8346 |
| 0.3877 | 7.23 | 2400 | 0.3779 | 0.8295 | 0.8300 |
| 0.3877 | 7.83 | 2600 | 0.3703 | 0.8323 | 0.8323 |
| 0.3829 | 8.43 | 2800 | 0.3835 | 0.8294 | 0.8302 |
| 0.3863 | 9.04 | 3000 | 0.3726 | 0.8317 | 0.8319 |
| 0.3812 | 9.64 | 3200 | 0.3712 | 0.8341 | 0.8342 |
| 0.3835 | 10.24 | 3400 | 0.3717 | 0.8342 | 0.8344 |
| 0.3795 | 10.84 | 3600 | 0.3686 | 0.8353 | 0.8353 |
| 0.3819 | 11.45 | 3800 | 0.3694 | 0.8332 | 0.8332 |
| 0.3786 | 12.05 | 4000 | 0.3681 | 0.8339 | 0.8340 |
| 0.3774 | 12.65 | 4200 | 0.3715 | 0.8328 | 0.8331 |
| 0.378 | 13.25 | 4400 | 0.3692 | 0.8344 | 0.8346 |
| 0.3807 | 13.86 | 4600 | 0.3729 | 0.8349 | 0.8351 |
| 0.3755 | 14.46 | 4800 | 0.3677 | 0.8365 | 0.8366 |
| 0.3748 | 15.06 | 5000 | 0.3677 | 0.8360 | 0.8363 |
| 0.3736 | 15.66 | 5200 | 0.3680 | 0.8374 | 0.8376 |
| 0.3727 | 16.27 | 5400 | 0.3673 | 0.8355 | 0.8355 |
| 0.3746 | 16.87 | 5600 | 0.3744 | 0.8336 | 0.8342 |
| 0.368 | 17.47 | 5800 | 0.3766 | 0.8326 | 0.8332 |
| 0.3773 | 18.07 | 6000 | 0.3727 | 0.8346 | 0.8351 |
| 0.37 | 18.67 | 6200 | 0.3685 | 0.8350 | 0.8351 |
| 0.3739 | 19.28 | 6400 | 0.3668 | 0.8359 | 0.8361 |
| 0.3694 | 19.88 | 6600 | 0.3676 | 0.8364 | 0.8366 |
| 0.3653 | 20.48 | 6800 | 0.3681 | 0.8361 | 0.8364 |
| 0.3708 | 21.08 | 7000 | 0.3727 | 0.8344 | 0.8349 |
| 0.3729 | 21.69 | 7200 | 0.3663 | 0.8360 | 0.8361 |
| 0.3621 | 22.29 | 7400 | 0.3683 | 0.8363 | 0.8366 |
| 0.3653 | 22.89 | 7600 | 0.3711 | 0.8360 | 0.8363 |
| 0.3666 | 23.49 | 7800 | 0.3670 | 0.8358 | 0.8361 |
| 0.3683 | 24.1 | 8000 | 0.3703 | 0.8361 | 0.8364 |
| 0.3671 | 24.7 | 8200 | 0.3719 | 0.8356 | 0.8361 |
| 0.3606 | 25.3 | 8400 | 0.3692 | 0.8369 | 0.8372 |
| 0.3679 | 25.9 | 8600 | 0.3660 | 0.8365 | 0.8366 |
| 0.3658 | 26.51 | 8800 | 0.3665 | 0.8359 | 0.8361 |
| 0.3678 | 27.11 | 9000 | 0.3655 | 0.8359 | 0.8361 |
| 0.3721 | 27.71 | 9200 | 0.3668 | 0.8354 | 0.8357 |
| 0.3569 | 28.31 | 9400 | 0.3690 | 0.8351 | 0.8355 |
| 0.3638 | 28.92 | 9600 | 0.3671 | 0.8357 | 0.8359 |
| 0.3689 | 29.52 | 9800 | 0.3664 | 0.8355 | 0.8357 |
| 0.3592 | 30.12 | 10000 | 0.3669 | 0.8349 | 0.8351 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:51:39+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_core_notata-seqsight_16384_512_22M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3876
- F1 Score: 0.8262
- Accuracy: 0.8263

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.599 | 0.6 | 200 | 0.5009 | 0.7598 | 0.7605 |
| 0.5087 | 1.2 | 400 | 0.4444 | 0.7959 | 0.7959 |
| 0.4712 | 1.81 | 600 | 0.4240 | 0.8048 | 0.8050 |
| 0.4562 | 2.41 | 800 | 0.4106 | 0.8167 | 0.8167 |
| 0.4398 | 3.01 | 1000 | 0.4037 | 0.8198 | 0.8199 |
| 0.4332 | 3.61 | 1200 | 0.3945 | 0.8278 | 0.8278 |
| 0.4306 | 4.22 | 1400 | 0.3928 | 0.8259 | 0.8259 |
| 0.4253 | 4.82 | 1600 | 0.3867 | 0.8266 | 0.8266 |
| 0.4152 | 5.42 | 1800 | 0.3868 | 0.8273 | 0.8274 |
| 0.4136 | 6.02 | 2000 | 0.3834 | 0.8300 | 0.8302 |
| 0.4131 | 6.63 | 2200 | 0.3820 | 0.8281 | 0.8282 |
| 0.4078 | 7.23 | 2400 | 0.3854 | 0.8276 | 0.8282 |
| 0.4047 | 7.83 | 2600 | 0.3816 | 0.8290 | 0.8293 |
| 0.4015 | 8.43 | 2800 | 0.3839 | 0.8267 | 0.8270 |
| 0.4027 | 9.04 | 3000 | 0.3845 | 0.8260 | 0.8265 |
| 0.4003 | 9.64 | 3200 | 0.3785 | 0.8271 | 0.8272 |
| 0.4017 | 10.24 | 3400 | 0.3779 | 0.8308 | 0.8308 |
| 0.398 | 10.84 | 3600 | 0.3774 | 0.8280 | 0.8280 |
| 0.4007 | 11.45 | 3800 | 0.3776 | 0.8300 | 0.8300 |
| 0.3966 | 12.05 | 4000 | 0.3772 | 0.8316 | 0.8317 |
| 0.3969 | 12.65 | 4200 | 0.3782 | 0.8290 | 0.8291 |
| 0.3978 | 13.25 | 4400 | 0.3782 | 0.8290 | 0.8291 |
| 0.401 | 13.86 | 4600 | 0.3768 | 0.8289 | 0.8289 |
| 0.3947 | 14.46 | 4800 | 0.3768 | 0.8309 | 0.8310 |
| 0.3951 | 15.06 | 5000 | 0.3772 | 0.8314 | 0.8315 |
| 0.3952 | 15.66 | 5200 | 0.3750 | 0.8323 | 0.8323 |
| 0.3933 | 16.27 | 5400 | 0.3759 | 0.8298 | 0.8298 |
| 0.3947 | 16.87 | 5600 | 0.3822 | 0.8296 | 0.8300 |
| 0.3898 | 17.47 | 5800 | 0.3828 | 0.8289 | 0.8295 |
| 0.3972 | 18.07 | 6000 | 0.3775 | 0.8330 | 0.8332 |
| 0.3911 | 18.67 | 6200 | 0.3747 | 0.8315 | 0.8315 |
| 0.3946 | 19.28 | 6400 | 0.3744 | 0.8324 | 0.8325 |
| 0.3924 | 19.88 | 6600 | 0.3748 | 0.8322 | 0.8323 |
| 0.388 | 20.48 | 6800 | 0.3777 | 0.8325 | 0.8329 |
| 0.3919 | 21.08 | 7000 | 0.3780 | 0.8326 | 0.8329 |
| 0.3949 | 21.69 | 7200 | 0.3738 | 0.8317 | 0.8317 |
| 0.3847 | 22.29 | 7400 | 0.3756 | 0.8334 | 0.8336 |
| 0.3866 | 22.89 | 7600 | 0.3761 | 0.8325 | 0.8327 |
| 0.3891 | 23.49 | 7800 | 0.3752 | 0.8318 | 0.8319 |
| 0.3906 | 24.1 | 8000 | 0.3770 | 0.8326 | 0.8329 |
| 0.3891 | 24.7 | 8200 | 0.3792 | 0.8312 | 0.8315 |
| 0.382 | 25.3 | 8400 | 0.3772 | 0.8323 | 0.8325 |
| 0.3903 | 25.9 | 8600 | 0.3743 | 0.8331 | 0.8332 |
| 0.3882 | 26.51 | 8800 | 0.3742 | 0.8328 | 0.8329 |
| 0.3881 | 27.11 | 9000 | 0.3741 | 0.8327 | 0.8329 |
| 0.3938 | 27.71 | 9200 | 0.3741 | 0.8329 | 0.8331 |
| 0.3808 | 28.31 | 9400 | 0.3766 | 0.8328 | 0.8331 |
| 0.3873 | 28.92 | 9600 | 0.3750 | 0.8333 | 0.8334 |
| 0.3899 | 29.52 | 9800 | 0.3747 | 0.8331 | 0.8332 |
| 0.3826 | 30.12 | 10000 | 0.3750 | 0.8329 | 0.8331 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:51:39+00:00
text-generation
transformers
{}
delphi-suite/stories-llama2-100k
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:52:26+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
fenguhao/hh-rlhf-dpo-0.5
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:53:13+00:00
text-generation
transformers
{}
delphi-suite/stories-mamba-100k
null
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:53:45+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_core_notata-seqsight_16384_512_22M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3809
- F1 Score: 0.8327
- Accuracy: 0.8327

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5454 | 0.6 | 200 | 0.4356 | 0.7928 | 0.7940 |
| 0.4294 | 1.2 | 400 | 0.3888 | 0.8291 | 0.8291 |
| 0.4088 | 1.81 | 600 | 0.3874 | 0.8299 | 0.8302 |
| 0.4022 | 2.41 | 800 | 0.3808 | 0.8334 | 0.8336 |
| 0.395 | 3.01 | 1000 | 0.3802 | 0.8319 | 0.8323 |
| 0.3909 | 3.61 | 1200 | 0.3777 | 0.8354 | 0.8357 |
| 0.3921 | 4.22 | 1400 | 0.3732 | 0.8349 | 0.8349 |
| 0.3896 | 4.82 | 1600 | 0.3753 | 0.8333 | 0.8336 |
| 0.3785 | 5.42 | 1800 | 0.3732 | 0.8329 | 0.8329 |
| 0.3839 | 6.02 | 2000 | 0.3740 | 0.8354 | 0.8357 |
| 0.3814 | 6.63 | 2200 | 0.3722 | 0.8380 | 0.8381 |
| 0.3754 | 7.23 | 2400 | 0.3780 | 0.8340 | 0.8346 |
| 0.375 | 7.83 | 2600 | 0.3668 | 0.8385 | 0.8385 |
| 0.3692 | 8.43 | 2800 | 0.3805 | 0.8358 | 0.8364 |
| 0.3729 | 9.04 | 3000 | 0.3733 | 0.8380 | 0.8381 |
| 0.3688 | 9.64 | 3200 | 0.3706 | 0.8380 | 0.8381 |
| 0.3686 | 10.24 | 3400 | 0.3700 | 0.8392 | 0.8393 |
| 0.3657 | 10.84 | 3600 | 0.3663 | 0.8395 | 0.8396 |
| 0.367 | 11.45 | 3800 | 0.3662 | 0.8400 | 0.8400 |
| 0.3643 | 12.05 | 4000 | 0.3660 | 0.8373 | 0.8374 |
| 0.3605 | 12.65 | 4200 | 0.3702 | 0.8351 | 0.8353 |
| 0.3627 | 13.25 | 4400 | 0.3690 | 0.8380 | 0.8381 |
| 0.3648 | 13.86 | 4600 | 0.3738 | 0.8395 | 0.8398 |
| 0.359 | 14.46 | 4800 | 0.3685 | 0.8391 | 0.8393 |
| 0.3582 | 15.06 | 5000 | 0.3672 | 0.8377 | 0.8379 |
| 0.3546 | 15.66 | 5200 | 0.3717 | 0.8374 | 0.8376 |
| 0.356 | 16.27 | 5400 | 0.3697 | 0.8364 | 0.8364 |
| 0.3576 | 16.87 | 5600 | 0.3829 | 0.8312 | 0.8319 |
| 0.3492 | 17.47 | 5800 | 0.3789 | 0.8332 | 0.8338 |
| 0.36 | 18.07 | 6000 | 0.3767 | 0.8359 | 0.8364 |
| 0.3515 | 18.67 | 6200 | 0.3726 | 0.8376 | 0.8378 |
| 0.3552 | 19.28 | 6400 | 0.3708 | 0.8383 | 0.8385 |
| 0.3499 | 19.88 | 6600 | 0.3696 | 0.8380 | 0.8383 |
| 0.3453 | 20.48 | 6800 | 0.3717 | 0.8358 | 0.8361 |
| 0.3514 | 21.08 | 7000 | 0.3809 | 0.8358 | 0.8363 |
| 0.3533 | 21.69 | 7200 | 0.3723 | 0.8350 | 0.8351 |
| 0.3427 | 22.29 | 7400 | 0.3763 | 0.8345 | 0.8349 |
| 0.344 | 22.89 | 7600 | 0.3774 | 0.8366 | 0.8368 |
| 0.3451 | 23.49 | 7800 | 0.3723 | 0.8356 | 0.8359 |
| 0.349 | 24.1 | 8000 | 0.3782 | 0.8355 | 0.8359 |
| 0.3458 | 24.7 | 8200 | 0.3785 | 0.8326 | 0.8331 |
| 0.3402 | 25.3 | 8400 | 0.3771 | 0.8378 | 0.8381 |
| 0.3466 | 25.9 | 8600 | 0.3722 | 0.8378 | 0.8379 |
| 0.3426 | 26.51 | 8800 | 0.3739 | 0.8344 | 0.8346 |
| 0.3463 | 27.11 | 9000 | 0.3714 | 0.8380 | 0.8381 |
| 0.3511 | 27.71 | 9200 | 0.3738 | 0.8363 | 0.8366 |
| 0.3357 | 28.31 | 9400 | 0.3762 | 0.8359 | 0.8363 |
| 0.3418 | 28.92 | 9600 | 0.3753 | 0.8377 | 0.8379 |
| 0.3477 | 29.52 | 9800 | 0.3729 | 0.8372 | 0.8374 |
| 0.3378 | 30.12 | 10000 | 0.3739 | 0.8374 | 0.8376 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:53:56+00:00
null
sentence-transformers
[Biopeak Male Enhancement](https://pgccouncilcsp.powerappsportals.us/forums/general-discussion/8acfd8e8-aa03-ef11-a73d-001dd806eee4)

Furthermore, solid way of life decisions like standard activity, a decent eating regimen, overseeing pressure, and sufficient rest can emphatically influence sexual wellbeing and execution.

- Upgraded Size: A few items or procedures guarantee to increment penis size, albeit these cases can frequently need logical proof or may not deliver critical extremely durable changes.
- Boosted Moxie: Enhancements or strategies could invigorate sex drive and desire.
- Improved Relationship Fulfillment: Better sexual execution and fulfillment can emphatically influence connections.

It's vital to take note of that not all upgrade techniques are therapeutically or experimentally demonstrated, and numerous items advertised for these reasons might need guideline or logical proof supporting their adequacy and wellbeing. Prior to considering any type of male upgrade, it's significant to talk with a medical care proficient to grasp expected dangers, viability, and legitimate use.

VISIT HERE FOR OFFICIAL WEBSITE:- https://pgccouncilcsp.powerappsportals.us/forums/general-discussion/8acfd8e8-aa03-ef11-a73d-001dd806eee4
{"language": ["en"], "license": "bsd-2-clause", "library_name": "sentence-transformers", "tags": ["Biopeak Male Enhancement"]}
getbiopeakmaleenhancement/biopeakmaleenhancement
null
[ "sentence-transformers", "Biopeak Male Enhancement", "en", "license:bsd-2-clause", "region:us" ]
null
2024-04-27T06:54:39+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# 0.001_4iters_bs128_nodpo_only4w_iter_2

This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1) on the updated and the original datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
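The repo's tags mark this as a DPO run; a rough sketch of how an iteration with the hyperparameters above might be wired up with trl. This is heavily hedged: trl's `DPOTrainer` API has changed across versions, the `beta` value is not stated on the card, and the preference dataset below is a placeholder.

```python
# Sketch only: assumes a trl version whose DPOTrainer accepts beta/tokenizer kwargs.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

args = TrainingArguments(
    output_dir="iter_2",               # placeholder
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,     # 8 GPUs x 8 x 2 = total batch 128
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                    # trl can derive an implicit reference model
    args=args,
    train_dataset=preference_dataset,  # placeholder: prompt/chosen/rejected pairs
    tokenizer=tokenizer,
    beta=0.1,                          # assumed; not stated on the card
)
trainer.train()
```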
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_iter_2", "results": []}]}
ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:55:27+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# falcon-lima

This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the GAIR/lima dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4276

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0634 | 0.91 | 5 | 1.9126 |
| 1.9281 | 2.0 | 11 | 1.8793 |
| 1.7541 | 2.91 | 16 | 2.2713 |
| 1.5669 | 4.0 | 22 | 2.2287 |
| 1.3976 | 4.91 | 27 | 2.2656 |
| 1.2434 | 6.0 | 33 | 2.3438 |
| 1.1083 | 6.91 | 38 | 2.3551 |
| 1.0215 | 8.0 | 44 | 2.4332 |
| 0.9556 | 8.91 | 49 | 2.4332 |
| 0.9465 | 9.09 | 50 | 2.4276 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
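A hypothetical inference sketch for this checkpoint; since the repo is tagged with `custom_code`, `trust_remote_code=True` is assumed to be required, as with other Falcon checkpoints, and the prompt is illustrative:

```python
# Load the fine-tuned Falcon model and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pkarypis/falcon-lima"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True  # custom_code assumed
)

inputs = tokenizer("What is instruction tuning?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```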
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["GAIR/lima"], "base_model": "tiiuae/falcon-7b", "model-index": [{"name": "falcon-lima", "results": []}]}
pkarypis/falcon-lima
null
[ "transformers", "tensorboard", "safetensors", "falcon", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "custom_code", "dataset:GAIR/lima", "base_model:tiiuae/falcon-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:55:28+00:00
text-generation
transformers
{}
delphi-suite/stories-llama2-250k
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:55:56+00:00
null
null
{"license": "openrail"}
Coolwowsocoolwow/Troll_King
null
[ "license:openrail", "region:us" ]
null
2024-04-27T06:56:00+00:00
null
null
{"license": "apache-2.0"}
drmasad/HAH-0.1
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-27T06:56:16+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
fenguhao/hh-rlhf-sft
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:56:21+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_core_tata-seqsight_16384_512_22M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3972
- F1 Score: 0.8320
- Accuracy: 0.8320

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6125 | 5.13 | 200 | 0.5818 | 0.6925 | 0.6966 |
| 0.5581 | 10.26 | 400 | 0.5616 | 0.7238 | 0.7259 |
| 0.5376 | 15.38 | 600 | 0.5549 | 0.7372 | 0.7406 |
| 0.5186 | 20.51 | 800 | 0.5253 | 0.7514 | 0.7520 |
| 0.4926 | 25.64 | 1000 | 0.5068 | 0.7617 | 0.7618 |
| 0.4751 | 30.77 | 1200 | 0.4945 | 0.7745 | 0.7749 |
| 0.4577 | 35.9 | 1400 | 0.4751 | 0.7896 | 0.7896 |
| 0.4421 | 41.03 | 1600 | 0.4661 | 0.7944 | 0.7945 |
| 0.428 | 46.15 | 1800 | 0.4642 | 0.7944 | 0.7945 |
| 0.4189 | 51.28 | 2000 | 0.4640 | 0.7924 | 0.7928 |
| 0.4091 | 56.41 | 2200 | 0.4608 | 0.7896 | 0.7896 |
| 0.4074 | 61.54 | 2400 | 0.4471 | 0.7993 | 0.7993 |
| 0.4001 | 66.67 | 2600 | 0.4552 | 0.8057 | 0.8059 |
| 0.3981 | 71.79 | 2800 | 0.4435 | 0.8058 | 0.8059 |
| 0.3893 | 76.92 | 3000 | 0.4412 | 0.8040 | 0.8042 |
| 0.3854 | 82.05 | 3200 | 0.4451 | 0.8022 | 0.8026 |
| 0.3804 | 87.18 | 3400 | 0.4389 | 0.8137 | 0.8140 |
| 0.3746 | 92.31 | 3600 | 0.4286 | 0.8157 | 0.8157 |
| 0.3675 | 97.44 | 3800 | 0.4335 | 0.8091 | 0.8091 |
| 0.3656 | 102.56 | 4000 | 0.4307 | 0.8171 | 0.8173 |
| 0.3665 | 107.69 | 4200 | 0.4197 | 0.8237 | 0.8238 |
| 0.3599 | 112.82 | 4400 | 0.4204 | 0.8270 | 0.8271 |
| 0.3589 | 117.95 | 4600 | 0.4154 | 0.8254 | 0.8254 |
| 0.3595 | 123.08 | 4800 | 0.4228 | 0.8121 | 0.8124 |
| 0.3538 | 128.21 | 5000 | 0.4202 | 0.8222 | 0.8222 |
| 0.3471 | 133.33 | 5200 | 0.4115 | 0.8303 | 0.8303 |
| 0.351 | 138.46 | 5400 | 0.4065 | 0.8320 | 0.8320 |
| 0.339 | 143.59 | 5600 | 0.4151 | 0.8254 | 0.8254 |
| 0.3439 | 148.72 | 5800 | 0.4087 | 0.8336 | 0.8336 |
| 0.3392 | 153.85 | 6000 | 0.4124 | 0.8253 | 0.8254 |
| 0.3392 | 158.97 | 6200 | 0.4034 | 0.8303 | 0.8303 |
| 0.3348 | 164.1 | 6400 | 0.4067 | 0.8335 | 0.8336 |
| 0.3364 | 169.23 | 6600 | 0.3981 | 0.8418 | 0.8418 |
| 0.3299 | 174.36 | 6800 | 0.3974 | 0.8369 | 0.8369 |
| 0.3317 | 179.49 | 7000 | 0.3942 | 0.8368 | 0.8369 |
| 0.3328 | 184.62 | 7200 | 0.4024 | 0.8352 | 0.8352 |
| 0.3263 | 189.74 | 7400 | 0.4008 | 0.8434 | 0.8434 |
| 0.3291 | 194.87 | 7600 | 0.3960 | 0.8401 | 0.8401 |
| 0.3266 | 200.0 | 7800 | 0.3935 | 0.8401 | 0.8401 |
| 0.3205 | 205.13 | 8000 | 0.3943 | 0.8418 | 0.8418 |
| 0.3242 | 210.26 | 8200 | 0.3932 | 0.8434 | 0.8434 |
| 0.3252 | 215.38 | 8400 | 0.3969 | 0.8417 | 0.8418 |
| 0.3203 | 220.51 | 8600 | 0.3973 | 0.8434 | 0.8434 |
| 0.3253 | 225.64 | 8800 | 0.3924 | 0.8450 | 0.8450 |
| 0.3245 | 230.77 | 9000 | 0.3911 | 0.8450 | 0.8450 |
| 0.3215 | 235.9 | 9200 | 0.3916 | 0.8434 | 0.8434 |
| 0.3213 | 241.03 | 9400 | 0.3919 | 0.8434 | 0.8434 |
| 0.3195 | 246.15 | 9600 | 0.3936 | 0.8418 | 0.8418 |
| 0.3194 | 251.28 | 9800 | 0.3939 | 0.8434 | 0.8434 |
| 0.3202 | 256.41 | 10000 | 0.3924 | 0.8418 | 0.8418 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
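On the two reported metrics: a sketch of how F1 score and accuracy could be computed from predictions with scikit-learn. The averaging mode is an assumption (macro F1 would explain F1 tracking accuracy so closely on a binary task); the card does not show its evaluation code.

```python
# Toy illustration of the card's two metrics.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1]   # toy labels
y_pred = [0, 1, 0, 0, 1]   # toy predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))  # averaging choice assumed
```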
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:56:30+00:00
null
null
{}
aasda111/bigllama
null
[ "region:us" ]
null
2024-04-27T06:57:15+00:00
text-generation
transformers
{}
delphi-suite/stories-mamba-250k
null
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:57:40+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_tata-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4524 - F1 Score: 0.8336 - Accuracy: 0.8336 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5913 | 5.13 | 200 | 0.5732 | 0.7154 | 0.7194 | | 0.5197 | 10.26 | 400 | 0.5345 | 0.7489 | 0.7504 | | 0.4713 | 15.38 | 600 | 0.5055 | 0.7640 | 0.7667 | | 0.4343 | 20.51 | 800 | 0.4584 | 0.7944 | 0.7945 | | 0.4026 | 25.64 | 1000 | 0.4487 | 0.7960 | 0.7961 | | 0.3804 | 30.77 | 1200 | 0.4205 | 0.8171 | 0.8173 | | 0.3578 | 35.9 | 1400 | 0.4204 | 0.8187 | 0.8189 | | 0.3399 | 41.03 | 1600 | 0.4138 | 0.8236 | 0.8238 | | 0.3253 | 46.15 | 1800 | 0.3961 | 0.8401 | 0.8401 | | 0.3099 | 51.28 | 2000 | 0.3872 | 0.8434 | 0.8434 | | 0.2993 | 56.41 | 2200 | 0.4005 | 0.8450 | 0.8450 | | 0.2905 | 61.54 | 2400 | 0.3888 | 0.8482 | 0.8483 | | 0.2816 | 66.67 | 2600 | 0.3918 | 0.8450 | 0.8450 | | 0.2775 | 71.79 | 2800 | 0.3913 | 0.8515 | 0.8515 | | 0.2672 | 76.92 | 3000 | 0.4008 | 0.8352 | 0.8352 | | 0.261 | 82.05 | 3200 | 0.3922 | 0.8450 | 0.8450 | | 0.2541 | 87.18 | 3400 | 0.3995 | 0.8384 | 0.8385 | | 0.2516 | 92.31 | 3600 | 0.3806 | 0.8515 | 0.8515 | | 0.2388 | 97.44 | 3800 | 0.4138 | 0.8467 | 0.8467 | | 0.2362 | 102.56 | 4000 | 0.3912 | 0.8498 | 0.8499 | | 0.2326 | 107.69 | 4200 | 0.3894 | 0.8466 | 0.8467 | | 0.2303 | 112.82 | 4400 | 0.4014 | 0.8515 | 0.8515 | | 0.224 | 117.95 | 4600 | 0.3839 | 0.8515 | 0.8515 | | 0.2209 | 123.08 | 4800 | 0.4082 | 0.8417 | 0.8418 | | 0.2172 | 128.21 | 5000 | 0.4070 | 0.8483 | 0.8483 | | 0.213 | 133.33 | 5200 | 0.4038 | 0.8466 | 0.8467 | | 0.2121 | 138.46 | 5400 | 0.3999 | 0.8466 | 0.8467 | | 0.2055 | 143.59 | 5600 | 0.4072 | 0.8450 | 0.8450 | | 0.2059 | 148.72 | 5800 | 0.4021 | 0.8499 | 0.8499 | | 0.201 | 153.85 | 6000 | 0.4006 | 0.8483 | 0.8483 | | 0.1988 | 158.97 | 6200 | 0.4069 | 0.8532 | 0.8532 | | 0.1938 | 164.1 | 6400 | 0.4230 | 0.8467 | 0.8467 | | 0.1932 | 169.23 | 6600 | 0.4137 | 0.8499 | 0.8499 | | 0.1907 | 174.36 | 6800 | 0.4101 | 0.8450 | 0.8450 | | 0.1927 | 179.49 | 7000 | 0.4092 | 0.8482 | 0.8483 | | 0.1898 | 184.62 | 7200 | 0.4150 | 0.8548 | 0.8548 | | 0.1835 | 189.74 | 7400 | 0.4322 | 0.8433 | 0.8434 | | 0.1822 | 194.87 | 7600 | 0.4188 | 0.8483 | 0.8483 | | 0.1804 | 200.0 | 7800 | 0.4215 | 0.8515 | 0.8515 | | 0.1778 | 205.13 | 8000 | 0.4222 | 0.8466 | 0.8467 | | 0.1769 | 210.26 | 8200 | 0.4239 | 0.8483 | 0.8483 | | 0.183 | 215.38 | 8400 | 0.4203 | 0.8434 | 0.8434 | | 0.1787 | 220.51 | 8600 | 0.4216 | 0.8515 | 0.8515 | | 
0.1792 | 225.64 | 8800 | 0.4227 | 0.8499 | 0.8499 | | 0.178 | 230.77 | 9000 | 0.4221 | 0.8548 | 0.8548 | | 0.1732 | 235.9 | 9200 | 0.4266 | 0.8499 | 0.8499 | | 0.1747 | 241.03 | 9400 | 0.4287 | 0.8499 | 0.8499 | | 0.1734 | 246.15 | 9600 | 0.4266 | 0.8499 | 0.8499 | | 0.1716 | 251.28 | 9800 | 0.4281 | 0.8515 | 0.8515 | | 0.1705 | 256.41 | 10000 | 0.4283 | 0.8515 | 0.8515 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
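The card stops at the training log, so the following is only a minimal usage sketch, not part of the original card: it attaches the adapter to its base model with the standard PEFT API. The repo ids are taken from the card; the sequence-classification head and `num_labels=2` are assumptions about the GUE promoter task, and the base model may additionally require `trust_remote_code=True`.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_22M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Load the frozen base model, then attach the fine-tuned adapter weights on top.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # num_labels assumed
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```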
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:57:54+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_tata-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4128 - F1 Score: 0.8434 - Accuracy: 0.8434 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5692 | 5.13 | 200 | 0.5421 | 0.7380 | 0.7390 | | 0.4703 | 10.26 | 400 | 0.5086 | 0.7645 | 0.7667 | | 0.4096 | 15.38 | 600 | 0.4292 | 0.8042 | 0.8042 | | 0.3624 | 20.51 | 800 | 0.4130 | 0.8270 | 0.8271 | | 0.3192 | 25.64 | 1000 | 0.4094 | 0.8417 | 0.8418 | | 0.2901 | 30.77 | 1200 | 0.3982 | 0.8397 | 0.8401 | | 0.264 | 35.9 | 1400 | 0.3946 | 0.8434 | 0.8434 | | 0.2478 | 41.03 | 1600 | 0.4076 | 0.8433 | 0.8434 | | 0.2296 | 46.15 | 1800 | 0.3894 | 0.8515 | 0.8515 | | 0.2114 | 51.28 | 2000 | 0.4115 | 0.8548 | 0.8548 | | 0.2007 | 56.41 | 2200 | 0.4314 | 0.8467 | 0.8467 | | 0.1905 | 61.54 | 2400 | 0.4387 | 0.8385 | 0.8385 | | 0.1807 | 66.67 | 2600 | 0.4426 | 0.8531 | 0.8532 | | 0.1714 | 71.79 | 2800 | 0.4847 | 0.8417 | 0.8418 | | 0.1598 | 76.92 | 3000 | 0.5437 | 0.8302 | 0.8303 | | 0.1492 | 82.05 | 3200 | 0.5206 | 0.8383 | 0.8385 | | 0.1436 | 87.18 | 3400 | 0.5097 | 0.8384 | 0.8385 | | 0.1353 | 92.31 | 3600 | 0.5247 | 0.8483 | 0.8483 | | 0.1276 | 97.44 | 3800 | 0.5490 | 0.8467 | 0.8467 | | 0.1246 | 102.56 | 4000 | 0.5494 | 0.8433 | 0.8434 | | 0.1162 | 107.69 | 4200 | 0.5452 | 0.8433 | 0.8434 | | 0.1188 | 112.82 | 4400 | 0.5519 | 0.8384 | 0.8385 | | 0.1062 | 117.95 | 4600 | 0.5500 | 0.8401 | 0.8401 | | 0.102 | 123.08 | 4800 | 0.5665 | 0.8385 | 0.8385 | | 0.1 | 128.21 | 5000 | 0.5888 | 0.8385 | 0.8385 | | 0.0928 | 133.33 | 5200 | 0.6022 | 0.8401 | 0.8401 | | 0.0916 | 138.46 | 5400 | 0.6165 | 0.8450 | 0.8450 | | 0.0894 | 143.59 | 5600 | 0.6231 | 0.8466 | 0.8467 | | 0.0816 | 148.72 | 5800 | 0.6158 | 0.8385 | 0.8385 | | 0.0829 | 153.85 | 6000 | 0.6345 | 0.8368 | 0.8369 | | 0.0802 | 158.97 | 6200 | 0.6379 | 0.8303 | 0.8303 | | 0.0779 | 164.1 | 6400 | 0.6544 | 0.8448 | 0.8450 | | 0.0727 | 169.23 | 6600 | 0.6612 | 0.8385 | 0.8385 | | 0.0739 | 174.36 | 6800 | 0.6426 | 0.8450 | 0.8450 | | 0.0723 | 179.49 | 7000 | 0.6691 | 0.8385 | 0.8385 | | 0.0705 | 184.62 | 7200 | 0.6652 | 0.8433 | 0.8434 | | 0.0678 | 189.74 | 7400 | 0.6879 | 0.8416 | 0.8418 | | 0.0655 | 194.87 | 7600 | 0.6831 | 0.8351 | 0.8352 | | 0.065 | 200.0 | 7800 | 0.6698 | 0.8450 | 0.8450 | | 0.0652 | 205.13 | 8000 | 0.6868 | 0.8400 | 0.8401 | | 0.0619 | 210.26 | 8200 | 0.6964 | 0.8433 | 0.8434 | | 0.0622 | 215.38 | 8400 | 0.6994 | 0.8384 | 0.8385 | | 0.0624 | 220.51 | 8600 | 0.7099 | 0.8416 | 0.8418 | | 
0.0607 | 225.64 | 8800 | 0.6958 | 0.8434 | 0.8434 | | 0.0613 | 230.77 | 9000 | 0.7013 | 0.8416 | 0.8418 | | 0.0552 | 235.9 | 9200 | 0.7090 | 0.8433 | 0.8434 | | 0.0562 | 241.03 | 9400 | 0.7224 | 0.8433 | 0.8434 | | 0.0572 | 246.15 | 9600 | 0.7151 | 0.8433 | 0.8434 | | 0.056 | 251.28 | 9800 | 0.7228 | 0.8433 | 0.8434 | | 0.0564 | 256.41 | 10000 | 0.7224 | 0.8466 | 0.8467 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:57:54+00:00
null
keras
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mindmate-f2-original-equal-cont-0-0 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Tokenizers 0.19.1
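The optimizer dictionary above is a plain Keras Adam serialization; as a hedged reconstruction only (the task head, loss, and data are unknown, so `TFAutoModelForSequenceClassification` here is an assumption):

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-german-cased")
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-7, amsgrad=False
)
# Transformers' TF models fall back to their built-in loss when none is passed.
model.compile(optimizer=optimizer)
```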
{"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-german-cased", "model-index": [{"name": "mindmate-f2-original-equal-cont-0-0", "results": []}]}
spneshaei/mindmate-f2-original-equal-cont-0-0
null
[ "keras", "tf", "bert", "generated_from_keras_callback", "base_model:bert-base-german-cased", "license:mit", "region:us" ]
null
2024-04-27T06:58:16+00:00
null
keras
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mindmate-f1-original-equal-cont-0-0 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-german-cased", "model-index": [{"name": "mindmate-f1-original-equal-cont-0-0", "results": []}]}
spneshaei/mindmate-f1-original-equal-cont-0-0
null
[ "keras", "tf", "bert", "generated_from_keras_callback", "base_model:bert-base-german-cased", "license:mit", "region:us" ]
null
2024-04-27T06:58:44+00:00
text-generation
transformers
{}
delphi-suite/stories-llama2-500k
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:59:07+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.2225 - F1 Score: 0.9096 - Accuracy: 0.9096 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4379 | 0.54 | 200 | 0.3007 | 0.8866 | 0.8867 | | 0.3184 | 1.08 | 400 | 0.2642 | 0.8973 | 0.8973 | | 0.2875 | 1.62 | 600 | 0.2449 | 0.9024 | 0.9024 | | 0.2629 | 2.16 | 800 | 0.2442 | 0.9072 | 0.9073 | | 0.2575 | 2.7 | 1000 | 0.2298 | 0.9108 | 0.9108 | | 0.2475 | 3.24 | 1200 | 0.2315 | 0.9093 | 0.9093 | | 0.2454 | 3.78 | 1400 | 0.2247 | 0.9106 | 0.9106 | | 0.2432 | 4.32 | 1600 | 0.2197 | 0.9143 | 0.9144 | | 0.2352 | 4.86 | 1800 | 0.2212 | 0.9132 | 0.9132 | | 0.233 | 5.41 | 2000 | 0.2176 | 0.9137 | 0.9137 | | 0.2356 | 5.95 | 2200 | 0.2174 | 0.9125 | 0.9125 | | 0.2291 | 6.49 | 2400 | 0.2153 | 0.9128 | 0.9128 | | 0.2303 | 7.03 | 2600 | 0.2161 | 0.9133 | 0.9133 | | 0.2246 | 7.57 | 2800 | 0.2144 | 0.9120 | 0.9120 | | 0.224 | 8.11 | 3000 | 0.2101 | 0.9142 | 0.9142 | | 0.2251 | 8.65 | 3200 | 0.2094 | 0.9164 | 0.9164 | | 0.2154 | 9.19 | 3400 | 0.2082 | 0.9176 | 0.9176 | | 0.2188 | 9.73 | 3600 | 0.2078 | 0.9154 | 0.9154 | | 0.2238 | 10.27 | 3800 | 0.2072 | 0.9165 | 0.9166 | | 0.2186 | 10.81 | 4000 | 0.2061 | 0.9147 | 0.9147 | | 0.2214 | 11.35 | 4200 | 0.2097 | 0.9148 | 0.9149 | | 0.2135 | 11.89 | 4400 | 0.2059 | 0.9154 | 0.9154 | | 0.2144 | 12.43 | 4600 | 0.2052 | 0.9165 | 0.9166 | | 0.2149 | 12.97 | 4800 | 0.2025 | 0.9176 | 0.9176 | | 0.212 | 13.51 | 5000 | 0.2044 | 0.9164 | 0.9164 | | 0.2149 | 14.05 | 5200 | 0.2033 | 0.9162 | 0.9162 | | 0.2102 | 14.59 | 5400 | 0.2039 | 0.9170 | 0.9171 | | 0.2117 | 15.14 | 5600 | 0.2040 | 0.9165 | 0.9166 | | 0.209 | 15.68 | 5800 | 0.2014 | 0.9176 | 0.9176 | | 0.2135 | 16.22 | 6000 | 0.2052 | 0.9175 | 0.9176 | | 0.2116 | 16.76 | 6200 | 0.2017 | 0.9177 | 0.9177 | | 0.208 | 17.3 | 6400 | 0.1999 | 0.9199 | 0.9199 | | 0.2115 | 17.84 | 6600 | 0.2012 | 0.9175 | 0.9176 | | 0.2031 | 18.38 | 6800 | 0.2025 | 0.9182 | 0.9182 | | 0.2131 | 18.92 | 7000 | 0.1985 | 0.9191 | 0.9191 | | 0.2085 | 19.46 | 7200 | 0.1996 | 0.9187 | 0.9187 | | 0.2059 | 20.0 | 7400 | 0.1986 | 0.9192 | 0.9193 | | 0.2086 | 20.54 | 7600 | 0.1989 | 0.9181 | 0.9181 | | 0.207 | 21.08 | 7800 | 0.1980 | 0.9186 | 0.9186 | | 0.2057 | 21.62 | 8000 | 0.1992 | 0.9184 | 0.9184 | | 0.2078 | 22.16 | 8200 | 0.1983 | 0.9184 | 0.9184 | | 0.2017 | 22.7 | 8400 | 0.1978 | 0.9184 | 0.9184 | | 0.2079 | 23.24 | 8600 | 0.1978 | 0.9184 | 0.9184 | | 0.2038 | 23.78 | 8800 | 0.1981 | 0.9177 | 0.9177 | | 
0.2072 | 24.32 | 9000 | 0.1972 | 0.9187 | 0.9187 | | 0.206 | 24.86 | 9200 | 0.1978 | 0.9187 | 0.9187 | | 0.2034 | 25.41 | 9400 | 0.1970 | 0.9191 | 0.9191 | | 0.2049 | 25.95 | 9600 | 0.1973 | 0.9191 | 0.9191 | | 0.2058 | 26.49 | 9800 | 0.1975 | 0.9191 | 0.9191 | | 0.2041 | 27.03 | 10000 | 0.1973 | 0.9186 | 0.9186 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
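To make the hyperparameter list above concrete, here is one way those values map onto `TrainingArguments` — a sketch that sets only what the card states (the output path is hypothetical; everything else stays at library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="seqsight-gue-finetune",  # hypothetical path
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,  # "training_steps: 10000"
)
```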
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T07:00:10+00:00
null
null
{}
ai-tools-searchs/glass
null
[ "region:us" ]
null
2024-04-27T07:00:20+00:00
text-generation
transformers
{}
delphi-suite/stories-mamba-500k
null
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T07:00:45+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.2074 - F1 Score: 0.9167 - Accuracy: 0.9167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3768 | 0.54 | 200 | 0.2522 | 0.9005 | 0.9005 | | 0.2656 | 1.08 | 400 | 0.2300 | 0.9103 | 0.9103 | | 0.2466 | 1.62 | 600 | 0.2184 | 0.9137 | 0.9137 | | 0.2294 | 2.16 | 800 | 0.2176 | 0.9123 | 0.9123 | | 0.2297 | 2.7 | 1000 | 0.2088 | 0.9175 | 0.9176 | | 0.2193 | 3.24 | 1200 | 0.2132 | 0.9116 | 0.9117 | | 0.2182 | 3.78 | 1400 | 0.2069 | 0.9170 | 0.9171 | | 0.2172 | 4.32 | 1600 | 0.1972 | 0.9221 | 0.9221 | | 0.2089 | 4.86 | 1800 | 0.2019 | 0.9180 | 0.9181 | | 0.2092 | 5.41 | 2000 | 0.1964 | 0.9228 | 0.9228 | | 0.2096 | 5.95 | 2200 | 0.1939 | 0.9223 | 0.9223 | | 0.2031 | 6.49 | 2400 | 0.1931 | 0.9225 | 0.9225 | | 0.2046 | 7.03 | 2600 | 0.1918 | 0.9240 | 0.9240 | | 0.1968 | 7.57 | 2800 | 0.1901 | 0.9235 | 0.9235 | | 0.2004 | 8.11 | 3000 | 0.1894 | 0.9250 | 0.925 | | 0.1975 | 8.65 | 3200 | 0.1894 | 0.9226 | 0.9226 | | 0.1893 | 9.19 | 3400 | 0.1895 | 0.9242 | 0.9242 | | 0.1927 | 9.73 | 3600 | 0.1873 | 0.9253 | 0.9253 | | 0.1989 | 10.27 | 3800 | 0.1852 | 0.9243 | 0.9243 | | 0.1938 | 10.81 | 4000 | 0.1846 | 0.925 | 0.925 | | 0.1954 | 11.35 | 4200 | 0.1830 | 0.9258 | 0.9258 | | 0.1868 | 11.89 | 4400 | 0.1856 | 0.9245 | 0.9245 | | 0.1888 | 12.43 | 4600 | 0.1823 | 0.9252 | 0.9252 | | 0.1876 | 12.97 | 4800 | 0.1835 | 0.9235 | 0.9235 | | 0.1858 | 13.51 | 5000 | 0.1837 | 0.9238 | 0.9238 | | 0.1873 | 14.05 | 5200 | 0.1863 | 0.9252 | 0.9252 | | 0.1801 | 14.59 | 5400 | 0.1864 | 0.9231 | 0.9231 | | 0.1864 | 15.14 | 5600 | 0.1840 | 0.9242 | 0.9242 | | 0.1823 | 15.68 | 5800 | 0.1830 | 0.9257 | 0.9257 | | 0.184 | 16.22 | 6000 | 0.1865 | 0.9233 | 0.9233 | | 0.1828 | 16.76 | 6200 | 0.1823 | 0.9260 | 0.9260 | | 0.1791 | 17.3 | 6400 | 0.1816 | 0.9267 | 0.9267 | | 0.1816 | 17.84 | 6600 | 0.1815 | 0.9265 | 0.9265 | | 0.1747 | 18.38 | 6800 | 0.1831 | 0.9258 | 0.9258 | | 0.1827 | 18.92 | 7000 | 0.1793 | 0.9285 | 0.9285 | | 0.1799 | 19.46 | 7200 | 0.1800 | 0.9272 | 0.9272 | | 0.1778 | 20.0 | 7400 | 0.1806 | 0.9289 | 0.9289 | | 0.1809 | 20.54 | 7600 | 0.1797 | 0.9270 | 0.9270 | | 0.1792 | 21.08 | 7800 | 0.1781 | 0.9272 | 0.9272 | | 0.1782 | 21.62 | 8000 | 0.1802 | 0.9265 | 0.9265 | | 0.1765 | 22.16 | 8200 | 0.1792 | 0.9265 | 0.9265 | | 0.1735 | 22.7 | 8400 | 0.1797 | 0.9274 | 0.9274 | | 0.1783 | 23.24 | 8600 | 0.1792 | 0.9270 | 0.9270 | | 0.1756 | 23.78 | 8800 | 0.1794 | 0.9277 | 0.9277 | | 
0.1784 | 24.32 | 9000 | 0.1799 | 0.9274 | 0.9274 | | 0.176 | 24.86 | 9200 | 0.1796 | 0.9269 | 0.9269 | | 0.1736 | 25.41 | 9400 | 0.1802 | 0.9265 | 0.9265 | | 0.1753 | 25.95 | 9600 | 0.1796 | 0.9267 | 0.9267 | | 0.1756 | 26.49 | 9800 | 0.1793 | 0.9272 | 0.9272 | | 0.1741 | 27.03 | 10000 | 0.1795 | 0.9270 | 0.9270 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T07:02:01+00:00
null
null
{}
Hricha/A
null
[ "region:us" ]
null
2024-04-27T07:02:09+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
swj0419/hp_retrain_STEP0000010
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T07:02:14+00:00
text-generation
transformers
{}
delphi-suite/stories-llama2-1m
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T07:02:29+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_all-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.2079 - F1 Score: 0.9181 - Accuracy: 0.9181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3369 | 0.54 | 200 | 0.2350 | 0.9071 | 0.9071 | | 0.2434 | 1.08 | 400 | 0.2176 | 0.9145 | 0.9145 | | 0.2318 | 1.62 | 600 | 0.2080 | 0.9177 | 0.9177 | | 0.2171 | 2.16 | 800 | 0.2050 | 0.9177 | 0.9177 | | 0.2172 | 2.7 | 1000 | 0.2024 | 0.9184 | 0.9184 | | 0.2068 | 3.24 | 1200 | 0.2025 | 0.9177 | 0.9177 | | 0.2048 | 3.78 | 1400 | 0.1906 | 0.9223 | 0.9223 | | 0.2031 | 4.32 | 1600 | 0.1847 | 0.9262 | 0.9262 | | 0.1952 | 4.86 | 1800 | 0.1869 | 0.9253 | 0.9253 | | 0.1941 | 5.41 | 2000 | 0.1871 | 0.9267 | 0.9267 | | 0.1946 | 5.95 | 2200 | 0.1832 | 0.9284 | 0.9284 | | 0.1894 | 6.49 | 2400 | 0.1839 | 0.9269 | 0.9269 | | 0.1905 | 7.03 | 2600 | 0.1850 | 0.9289 | 0.9289 | | 0.1821 | 7.57 | 2800 | 0.1778 | 0.9280 | 0.9280 | | 0.1853 | 8.11 | 3000 | 0.1800 | 0.9289 | 0.9289 | | 0.1807 | 8.65 | 3200 | 0.1812 | 0.9280 | 0.9280 | | 0.1736 | 9.19 | 3400 | 0.1805 | 0.9257 | 0.9257 | | 0.1766 | 9.73 | 3600 | 0.1799 | 0.9285 | 0.9285 | | 0.1827 | 10.27 | 3800 | 0.1775 | 0.9284 | 0.9284 | | 0.1774 | 10.81 | 4000 | 0.1774 | 0.9292 | 0.9292 | | 0.1774 | 11.35 | 4200 | 0.1733 | 0.9309 | 0.9309 | | 0.1693 | 11.89 | 4400 | 0.1820 | 0.9311 | 0.9311 | | 0.1712 | 12.43 | 4600 | 0.1738 | 0.9309 | 0.9309 | | 0.1698 | 12.97 | 4800 | 0.1785 | 0.9294 | 0.9294 | | 0.1659 | 13.51 | 5000 | 0.1757 | 0.9306 | 0.9306 | | 0.1695 | 14.05 | 5200 | 0.1846 | 0.9253 | 0.9253 | | 0.1606 | 14.59 | 5400 | 0.1814 | 0.9314 | 0.9314 | | 0.1674 | 15.14 | 5600 | 0.1761 | 0.9314 | 0.9314 | | 0.1612 | 15.68 | 5800 | 0.1762 | 0.9302 | 0.9302 | | 0.1646 | 16.22 | 6000 | 0.1786 | 0.9296 | 0.9296 | | 0.1626 | 16.76 | 6200 | 0.1764 | 0.9311 | 0.9311 | | 0.1594 | 17.3 | 6400 | 0.1744 | 0.9319 | 0.9319 | | 0.1593 | 17.84 | 6600 | 0.1757 | 0.9312 | 0.9313 | | 0.1544 | 18.38 | 6800 | 0.1790 | 0.9321 | 0.9321 | | 0.1591 | 18.92 | 7000 | 0.1724 | 0.9341 | 0.9341 | | 0.1581 | 19.46 | 7200 | 0.1749 | 0.9334 | 0.9334 | | 0.1554 | 20.0 | 7400 | 0.1751 | 0.9341 | 0.9341 | | 0.1573 | 20.54 | 7600 | 0.1743 | 0.9343 | 0.9343 | | 0.1574 | 21.08 | 7800 | 0.1721 | 0.9346 | 0.9346 | | 0.1557 | 21.62 | 8000 | 0.1741 | 0.9341 | 0.9341 | | 0.1523 | 22.16 | 8200 | 0.1740 | 0.9338 | 0.9338 | | 0.1492 | 22.7 | 8400 | 0.1747 | 0.9346 | 0.9346 | | 0.1529 | 23.24 | 8600 | 0.1745 | 0.9353 | 0.9353 | | 0.1518 | 23.78 | 8800 | 0.1750 | 0.9338 | 
0.9338 | | 0.154 | 24.32 | 9000 | 0.1749 | 0.9326 | 0.9326 | | 0.1492 | 24.86 | 9200 | 0.1765 | 0.9341 | 0.9341 | | 0.1472 | 25.41 | 9400 | 0.1763 | 0.9340 | 0.9340 | | 0.1504 | 25.95 | 9600 | 0.1755 | 0.9350 | 0.9350 | | 0.15 | 26.49 | 9800 | 0.1749 | 0.9350 | 0.9350 | | 0.1477 | 27.03 | 10000 | 0.1752 | 0.9353 | 0.9353 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T07:03:28+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5340 - F1 Score: 0.7309 - Accuracy: 0.7295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6157 | 0.97 | 200 | 0.5936 | 0.6832 | 0.6820 | | 0.5787 | 1.93 | 400 | 0.5774 | 0.7012 | 0.6992 | | 0.5725 | 2.9 | 600 | 0.5869 | 0.6981 | 0.6974 | | 0.5631 | 3.86 | 800 | 0.5446 | 0.7278 | 0.7277 | | 0.556 | 4.83 | 1000 | 0.5833 | 0.7042 | 0.7038 | | 0.5518 | 5.8 | 1200 | 0.5797 | 0.7083 | 0.7077 | | 0.5492 | 6.76 | 1400 | 0.5688 | 0.7111 | 0.7101 | | 0.5467 | 7.73 | 1600 | 0.5496 | 0.7243 | 0.7225 | | 0.5411 | 8.7 | 1800 | 0.5540 | 0.7177 | 0.7162 | | 0.54 | 9.66 | 2000 | 0.5553 | 0.7220 | 0.7204 | | 0.5427 | 10.63 | 2200 | 0.5834 | 0.6982 | 0.6986 | | 0.5341 | 11.59 | 2400 | 0.5457 | 0.7267 | 0.7250 | | 0.5362 | 12.56 | 2600 | 0.5672 | 0.7142 | 0.7132 | | 0.5344 | 13.53 | 2800 | 0.5681 | 0.7129 | 0.7120 | | 0.535 | 14.49 | 3000 | 0.5910 | 0.6995 | 0.7005 | | 0.5305 | 15.46 | 3200 | 0.5434 | 0.7292 | 0.7274 | | 0.5298 | 16.43 | 3400 | 0.5669 | 0.7112 | 0.7107 | | 0.5307 | 17.39 | 3600 | 0.5580 | 0.7189 | 0.7177 | | 0.5299 | 18.36 | 3800 | 0.5393 | 0.7359 | 0.7340 | | 0.5277 | 19.32 | 4000 | 0.5525 | 0.7239 | 0.7225 | | 0.5266 | 20.29 | 4200 | 0.5531 | 0.7243 | 0.7228 | | 0.5261 | 21.26 | 4400 | 0.5635 | 0.7198 | 0.7189 | | 0.5255 | 22.22 | 4600 | 0.5694 | 0.7111 | 0.7107 | | 0.5252 | 23.19 | 4800 | 0.5419 | 0.7309 | 0.7292 | | 0.5242 | 24.15 | 5000 | 0.5463 | 0.7265 | 0.7250 | | 0.5224 | 25.12 | 5200 | 0.5664 | 0.7154 | 0.7147 | | 0.5213 | 26.09 | 5400 | 0.5544 | 0.7244 | 0.7231 | | 0.5229 | 27.05 | 5600 | 0.5730 | 0.7142 | 0.7141 | | 0.5239 | 28.02 | 5800 | 0.5405 | 0.7300 | 0.7283 | | 0.5233 | 28.99 | 6000 | 0.5629 | 0.7161 | 0.7156 | | 0.5191 | 29.95 | 6200 | 0.5702 | 0.7173 | 0.7168 | | 0.5202 | 30.92 | 6400 | 0.5472 | 0.7250 | 0.7234 | | 0.5198 | 31.88 | 6600 | 0.5564 | 0.7202 | 0.7192 | | 0.5165 | 32.85 | 6800 | 0.5594 | 0.7205 | 0.7195 | | 0.5237 | 33.82 | 7000 | 0.5677 | 0.7143 | 0.7141 | | 0.5183 | 34.78 | 7200 | 0.5645 | 0.7183 | 0.7177 | | 0.5191 | 35.75 | 7400 | 0.5594 | 0.7200 | 0.7189 | | 0.5168 | 36.71 | 7600 | 0.5539 | 0.7225 | 0.7213 | | 0.5178 | 37.68 | 7800 | 0.5543 | 0.7236 | 0.7225 | | 0.5161 | 38.65 | 8000 | 0.5436 | 0.7256 | 0.7241 | | 0.5238 | 39.61 | 8200 | 0.5571 | 0.7220 | 0.7210 | | 0.5127 | 40.58 | 8400 | 0.5669 | 0.7167 | 0.7162 | | 0.5149 | 41.55 | 8600 | 0.5546 | 0.7231 | 0.7219 | | 0.5163 | 42.51 | 8800 | 0.5609 | 0.7198 | 0.7189 | | 0.5192 | 
43.48 | 9000 | 0.5633 | 0.7206 | 0.7198 | | 0.5169 | 44.44 | 9200 | 0.5575 | 0.7223 | 0.7213 | | 0.519 | 45.41 | 9400 | 0.5537 | 0.7212 | 0.7201 | | 0.511 | 46.38 | 9600 | 0.5605 | 0.7222 | 0.7213 | | 0.5194 | 47.34 | 9800 | 0.5564 | 0.7223 | 0.7213 | | 0.515 | 48.31 | 10000 | 0.5546 | 0.7221 | 0.7210 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T07:03:28+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # vsufiy/my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0615 - Validation Loss: 0.2217 - Train Accuracy: 0.9327 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2575 | 0.1853 | 0.9286 | 0 | | 0.1342 | 0.1917 | 0.9278 | 1 | | 0.0615 | 0.2217 | 0.9327 | 2 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
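The learning rate above is a `PolynomialDecay` schedule; with `power=1.0` it is simply a linear ramp from 2e-5 down to 0 over 7810 steps. A small sketch of the same schedule, with values copied from the card:

```python
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=7810,
    end_learning_rate=0.0,
    power=1.0,  # power=1.0 makes the decay linear
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule, epsilon=1e-8)
```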
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "vsufiy/my_awesome_model", "results": []}]}
vsufiy/my_awesome_model
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T07:03:43+00:00
text-generation
transformers
{}
delphi-suite/stories-mamba-1m
null
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T07:04:04+00:00
null
null
What is Optalite Tablet? Optalite is a dietary supplement capsule specially formulated to provide comprehensive support for eye health. Its advanced formula contains a synergistic combination of vitamins, minerals and antioxidants, carefully selected to nourish and protect the eyes against age-related degeneration and environmental stress. Official website:<a href="https://www.nutritionsee.com/optaitmalay">www.Optalite.com</a> <p><a href="https://www.nutritionsee.com/optaitmalay"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Optalite-Malaysia.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/optaitmalay">Buy now!! Click the link below for more information and get a 50% discount now... Hurry</a> Official website:<a href="https://www.nutritionsee.com/optaitmalay">www.Optalite.com</a>
{"license": "apache-2.0"}
Optalite/Optalite
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-27T07:05:48+00:00
null
keras
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mindmate-f5-original-equal-cont-0-0 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-german-cased", "model-index": [{"name": "mindmate-f5-original-equal-cont-0-0", "results": []}]}
spneshaei/mindmate-f5-original-equal-cont-0-0
null
[ "keras", "tf", "bert", "generated_from_keras_callback", "base_model:bert-base-german-cased", "license:mit", "region:us" ]
null
2024-04-27T07:05:54+00:00
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
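Since the card is still the unfilled template, here is only a hypothetical starting point: the repo is tagged for automatic speech recognition, so the checkpoint could be tried through the standard pipeline (the audio filename is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Mihaj/whisper-medium-karelian-CodeSwitching_with_tempo_aug",
)
print(asr("sample.wav")["text"])  # "sample.wav" is illustrative
```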
{"library_name": "transformers", "tags": []}
Mihaj/whisper-medium-karelian-CodeSwitching_with_tempo_aug
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T07:06:02+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-finetuned-en-to-ja-eval1 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.3092 - eval_bleu: 0.0 - eval_gen_len: 3.008 - eval_runtime: 2.2634 - eval_samples_per_second: 220.911 - eval_steps_per_second: 4.86 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
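The evaluation above reports a corpus BLEU (`eval_bleu`); as a hedged sketch of how such a score is typically computed with the `evaluate`/sacrebleu stack (the strings are placeholders, not from the card's dataset):

```python
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["これはテストです。"]   # hypothetical model outputs
references = [["これはテストです。"]]  # one list of references per prediction
print(bleu.compute(predictions=predictions, references=references)["score"])
```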
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "t5-base", "model-index": [{"name": "t5-finetuned-en-to-ja-eval1", "results": []}]}
tsetsuuhei/t5-finetuned-en-to-ja-eval1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T07:06:03+00:00
null
keras
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mindmate-f3-original-equal-cont-0-0 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-german-cased", "model-index": [{"name": "mindmate-f3-original-equal-cont-0-0", "results": []}]}
spneshaei/mindmate-f3-original-equal-cont-0-0
null
[ "keras", "tf", "bert", "generated_from_keras_callback", "base_model:bert-base-german-cased", "license:mit", "region:us" ]
null
2024-04-27T07:06:04+00:00
null
null
{}
thusinh1969/LLaMA-2-finetune-EP2-25APRIL2024-Q4_K_M.gguf
null
[ "gguf", "region:us" ]
null
2024-04-27T07:06:42+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
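The card is the empty template, but the repo tags mention 4-bit quantization, so as a hypothetical load only (the repo id is real; the quantization settings are assumptions, not documented by the author):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("NiCoSav/llama-3-8b-bnb-4bit")
model = AutoModelForCausalLM.from_pretrained(
    "NiCoSav/llama-3-8b-bnb-4bit",
    quantization_config=bnb,  # assumed config; the repo is tagged "4-bit"
    device_map="auto",
)
```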
{"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
NiCoSav/llama-3-8b-bnb-4bit
null
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T07:06:58+00:00
null
keras
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mindmate-f4-original-equal-cont-0-0 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-german-cased", "model-index": [{"name": "mindmate-f4-original-equal-cont-0-0", "results": []}]}
spneshaei/mindmate-f4-original-equal-cont-0-0
null
[ "keras", "tf", "bert", "generated_from_keras_callback", "base_model:bert-base-german-cased", "license:mit", "region:us" ]
null
2024-04-27T07:07:06+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
NiCoSav/llama-3-8b-bnb-16bit
null
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T07:07:15+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
swj0419/hp_retrain_STEP0000020
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T07:09:45+00:00
null
null
# Model Card for deepseek-coder-33b-instruct-pythagora

This model card describes the deepseek-coder-33b-instruct-pythagora model, a fine-tuned version of the DeepSeek Coder 33B Instruct model, specifically optimized for use with the Pythagora GPT Pilot application.

## Model Details

### Model Description

- **Developed by:** LoupGarou (GitHub: [MoonlightByte](https://github.com/MoonlightByte))
- **Model type:** Causal language model
- **Language(s) (NLP):** English
- **License:** DeepSeek Coder Model License
- **Finetuned from model:** [DeepSeek Coder 33B Instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct)

### Model Sources

- **Repository:** [LoupGarou/deepseek-coder-33b-instruct-pythagora-gguf](https://huggingface.co/LoupGarou/deepseek-coder-33b-instruct-pythagora-gguf)
- **GitHub Repository (Proxy Application):** [MoonlightByte/Pythagora-LLM-Proxy](https://github.com/MoonlightByte/Pythagora-LLM-Proxy)
- **Original Model Repository:** [DeepSeek Coder](https://github.com/deepseek-ai/deepseek-coder)

## Uses

### Direct Use

This model is intended for use with the [Pythagora GPT Pilot](https://github.com/Pythagora-io/gpt-pilot) application, which enables the creation of fully working, production-ready apps with the assistance of a developer. The model has been fine-tuned to work seamlessly with the GPT Pilot prompt structures and can be used through the [Pythagora LLM Proxy](https://github.com/MoonlightByte/Pythagora-LLM-Proxy). It is designed to generate code and assist with programming tasks such as writing features, debugging, and providing code reviews, all within the context of the Pythagora GPT Pilot application.

### Out-of-Scope Use

This model should not be used for tasks outside its intended use case with the Pythagora GPT Pilot application. It is not designed for standalone use or for integration with other applications without proper testing and adaptation. It should also not be used to generate content on sensitive topics such as politics, security, or privacy, as it is specifically trained to focus on computer science and programming-related tasks.

## Bias, Risks, and Limitations

As with any language model, biases present in the training data may be reflected in the model's outputs, and users should be aware of these potential limitations. Performance may also vary with the quality and relevance of the input prompts, as well as with the specific programming languages and frameworks used in the context of the Pythagora GPT Pilot application.

### Recommendations

Users should familiarize themselves with the [Pythagora GPT Pilot](https://github.com/Pythagora-io/gpt-pilot) application and its intended use cases before using this model. It is recommended to run the model behind the [Pythagora LLM Proxy](https://github.com/MoonlightByte/Pythagora-LLM-Proxy) for optimal performance and compatibility, and to carefully review and test the generated code for correctness, efficiency, and adherence to best practices and project requirements.

## How to Get Started with the Model

To use this model with the Pythagora GPT Pilot application:

1. Set up the Pythagora LLM Proxy by following the instructions in the [GitHub repository](https://github.com/MoonlightByte/Pythagora-LLM-Proxy).
2. Configure GPT Pilot to use the proxy by setting the OpenAI API endpoint to `http://localhost:8080/v1/chat/completions` (see the request sketch at the end of this section).
3. Run GPT Pilot as usual; the proxy handles the communication between GPT Pilot and the deepseek-coder-33b-instruct-pythagora model.
4. It is also possible to point Pythagora directly at LM Studio or any other service, with mixed results, since these models were not fine-tuned on a chat format.

For more detailed instructions and examples, please refer to the [Pythagora LLM Proxy README](https://github.com/MoonlightByte/Pythagora-LLM-Proxy/blob/main/README.md).
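As a quick check that the proxy is reachable, a request can be sent straight to the endpoint from step 2. The sketch below is illustrative only: it assumes the proxy exposes the standard OpenAI chat-completions schema at that URL, and the `model` field value is a placeholder the proxy may ignore.

```python
import requests

# Minimal smoke test against the Pythagora LLM Proxy endpoint (step 2 above).
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "deepseek-coder-33b-instruct-pythagora",  # placeholder; the proxy may ignore it
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "max_tokens": 256,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```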
## Training Details

### Training Data

The model was fine-tuned on a custom dataset created from sample prompts generated by the Pythagora prompt structures. The prompts are compatible with the version described in the [Pythagora README](https://github.com/Pythagora-io/gpt-pilot/blob/main/README.md). The dataset was carefully curated to ensure high-quality examples and a diverse range of programming tasks relevant to the Pythagora GPT Pilot application.

### Training Procedure

The model was fine-tuned using the training scripts and resources provided in the [DeepSeek Coder GitHub repository](https://github.com/deepseek-ai/DeepSeek-Coder.git); specifically, the [finetune/finetune_deepseekcoder.py](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/finetune/finetune_deepseekcoder.py) script was used to perform the fine-tuning. Training used PEFT with a maximum sequence length of 9,000 tokens, adapting the base DeepSeek Coder 33B Instruct model to the specific requirements and prompt structures of the Pythagora GPT Pilot application on the custom dataset. The training process leveraged DeepSpeed integration for efficient distributed training. For the specific hyperparameters and configurations used, please refer to the [DeepSeek Coder Fine-tuning Documentation](https://github.com/deepseek-ai/DeepSeek-Coder#how-to-fine-tune-deepseek-coder).
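Since the exact adapter configuration is not published, the sketch below only illustrates the general shape of a PEFT (LoRA-style) setup for this base model at the 9,000-token sequence length mentioned above; every hyperparameter shown (rank, alpha, dropout, target modules) is an assumption, not the configuration actually used.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "deepseek-ai/deepseek-coder-33b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
tokenizer.model_max_length = 9000  # maximum sequence length used for this fine-tune

model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

# Hypothetical adapter settings -- the values actually used are not disclosed.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
# Training then proceeds with a standard Trainer/DeepSpeed loop,
# as in finetune/finetune_deepseekcoder.py.
```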
## Model Examination

No additional interpretability work has been performed on this model. However, the model's performance has been thoroughly tested and validated within the context of the Pythagora GPT Pilot application to ensure its effectiveness in generating high-quality code and assisting with programming tasks.

## Environmental Impact

The environmental impact of this model has not been assessed. More information is needed to estimate the carbon emissions and electricity usage associated with its training and deployment. As a general recommendation, users should use the model efficiently and responsibly to minimize any potential environmental impact.

## Technical Specifications

- **Model Architecture:** Based on the DeepSeek Coder 33B Instruct model, a transformer-based causal language model optimized for code generation and understanding.
- **Compute Infrastructure:** The model was fine-tuned on high-performance computing resources, including GPUs, to ensure efficient and timely training. The exact specifications of the compute infrastructure used for training are not publicly disclosed.

## Citation

**APA:**

LoupGarou. (2024). deepseek-coder-33b-instruct-pythagora (Model). https://huggingface.co/LoupGarou/deepseek-coder-33b-instruct-pythagora

## Model Card Contact

For questions, feedback, or concerns regarding this model, please contact LoupGarou through the GitHub repository: [MoonlightByte/Pythagora-LLM-Proxy](https://github.com/MoonlightByte/Pythagora-LLM-Proxy). You can open an issue or submit a pull request to discuss any aspect of the model or its usage within the Pythagora GPT Pilot application.

**Original model card: DeepSeek's Deepseek Coder 33B Instruct**

**[🏠Homepage](https://www.deepseek.com/)** | **[🤖 Chat with DeepSeek Coder](https://coder.deepseek.com/)** | **[Discord](https://discord.gg/Tc7c45Zzu5)** | **[Wechat(微信)](https://github.com/guoday/assert/blob/main/QR.png?raw=true)**

---

### 1. Introduction of Deepseek Coder

Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.

- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.

### 2. Model Summary

deepseek-coder-33b-instruct is a 33B parameter model initialized from deepseek-coder-33b-base and fine-tuned on 2B tokens of instruction data.

- **Home Page:** [DeepSeek](https://www.deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)

### 3. How to Use

Here are some examples of how to use our model.

#### Chat Model Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-33b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-33b-instruct", trust_remote_code=True).cuda()
messages = [
    {'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
# 32021 is the id of the <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=32021)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```

### 4. License

This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License, and DeepSeek Coder supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
{}
LoupGarou/deepseek-coder-33b-instruct-pythagora-gguf
null
[ "gguf", "region:us" ]
null
2024-04-27T07:13:10+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# load_from_hub is the helper from the Hugging Face Deep RL course utilities,
# which downloads and unpickles the model dictionary from the Hub.
model = load_from_hub(repo_id="Jurij1/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
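Once the model is loaded, a greedy episode can be rolled out with the Q-table. The sketch below assumes the pickled dictionary stores its Q-table under a `"qtable"` key (the Deep RL course convention) and uses the classic `gym` API; with `gymnasium`, `reset()` returns `(state, info)` and `step()` returns a 5-tuple.

```python
import numpy as np

# Roll out one greedy episode using the downloaded Q-table.
state = env.reset()
done = False
episode_return = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action; "qtable" key assumed
    state, reward, done, info = env.step(action)
    episode_return += reward
print("Episode return:", episode_return)
```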
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.50 +/- 2.73", "name": "mean_reward", "verified": false}]}]}]}
Jurij1/q-Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-27T07:13:26+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# kaist-mistral-orpo-OHP-15k-Mathcode

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the orpo-explorers/OHP-15k-mathcode dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2.post303
- Datasets 2.18.0
- Tokenizers 0.15.2
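No usage instructions are included above; a minimal inference sketch follows. It assumes the checkpoint is hosted under this record's repo id and that the tokenizer ships a chat template (the model is tagged `conversational`).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "orpo-explorers/kaist-mistral-orpo-OHP-15k-Mathcode"  # this record's repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Solve 12 * 17 step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```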
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "orpo", "generated_from_trainer", "trl", "orpo", "generated_from_trainer"], "datasets": ["orpo-explorers/OHP-15k-mathcode"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "kaist-mistral-orpo-OHP-15k-Mathcode", "results": []}]}
orpo-explorers/kaist-mistral-orpo-OHP-15k-Mathcode
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:orpo-explorers/OHP-15k-mathcode", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T07:13:37+00:00
null
null
{}
Anna15/sn25-3-3
null
[ "region:us" ]
null
2024-04-27T07:13:48+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5250 - F1 Score: 0.7517 - Accuracy: 0.7504 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6011 | 0.97 | 200 | 0.5808 | 0.6964 | 0.6947 | | 0.5623 | 1.93 | 400 | 0.5564 | 0.7202 | 0.7183 | | 0.5528 | 2.9 | 600 | 0.5929 | 0.6877 | 0.6890 | | 0.5455 | 3.86 | 800 | 0.5319 | 0.7384 | 0.7371 | | 0.5385 | 4.83 | 1000 | 0.5795 | 0.7053 | 0.7053 | | 0.5367 | 5.8 | 1200 | 0.5689 | 0.7086 | 0.7083 | | 0.5306 | 6.76 | 1400 | 0.5359 | 0.7301 | 0.7283 | | 0.5285 | 7.73 | 1600 | 0.5372 | 0.7370 | 0.7352 | | 0.5213 | 8.7 | 1800 | 0.5544 | 0.7177 | 0.7168 | | 0.5192 | 9.66 | 2000 | 0.5565 | 0.7216 | 0.7207 | | 0.5212 | 10.63 | 2200 | 0.5757 | 0.7081 | 0.7086 | | 0.5101 | 11.59 | 2400 | 0.5296 | 0.7416 | 0.7398 | | 0.5124 | 12.56 | 2600 | 0.5613 | 0.7205 | 0.7198 | | 0.5097 | 13.53 | 2800 | 0.5587 | 0.7197 | 0.7189 | | 0.5089 | 14.49 | 3000 | 0.5724 | 0.7127 | 0.7126 | | 0.5033 | 15.46 | 3200 | 0.5293 | 0.7413 | 0.7395 | | 0.5041 | 16.43 | 3400 | 0.5549 | 0.7213 | 0.7207 | | 0.5021 | 17.39 | 3600 | 0.5424 | 0.7313 | 0.7298 | | 0.5011 | 18.36 | 3800 | 0.5222 | 0.7497 | 0.7480 | | 0.4981 | 19.32 | 4000 | 0.5401 | 0.7370 | 0.7356 | | 0.4958 | 20.29 | 4200 | 0.5409 | 0.7402 | 0.7386 | | 0.4955 | 21.26 | 4400 | 0.5610 | 0.7248 | 0.7241 | | 0.4913 | 22.22 | 4600 | 0.5626 | 0.7213 | 0.7207 | | 0.4939 | 23.19 | 4800 | 0.5332 | 0.7457 | 0.7440 | | 0.4898 | 24.15 | 5000 | 0.5490 | 0.7307 | 0.7295 | | 0.4909 | 25.12 | 5200 | 0.5706 | 0.7225 | 0.7222 | | 0.4869 | 26.09 | 5400 | 0.5599 | 0.7272 | 0.7265 | | 0.488 | 27.05 | 5600 | 0.5888 | 0.7138 | 0.7144 | | 0.4884 | 28.02 | 5800 | 0.5354 | 0.7405 | 0.7389 | | 0.4872 | 28.99 | 6000 | 0.5622 | 0.7210 | 0.7207 | | 0.4831 | 29.95 | 6200 | 0.5666 | 0.7272 | 0.7265 | | 0.483 | 30.92 | 6400 | 0.5294 | 0.7512 | 0.7495 | | 0.4829 | 31.88 | 6600 | 0.5467 | 0.7330 | 0.7316 | | 0.477 | 32.85 | 6800 | 0.5659 | 0.7268 | 0.7262 | | 0.4866 | 33.82 | 7000 | 0.5629 | 0.7223 | 0.7219 | | 0.4802 | 34.78 | 7200 | 0.5777 | 0.7170 | 0.7171 | | 0.4796 | 35.75 | 7400 | 0.5524 | 0.7372 | 0.7359 | | 0.4774 | 36.71 | 7600 | 0.5579 | 0.7274 | 0.7265 | | 0.478 | 37.68 | 7800 | 0.5509 | 0.7292 | 0.7280 | | 0.4752 | 38.65 | 8000 | 0.5454 | 0.7382 | 0.7368 | | 0.484 | 39.61 | 8200 | 0.5533 | 0.7299 | 0.7289 | | 0.4721 | 40.58 | 8400 | 0.5691 | 0.7237 | 0.7231 | | 0.4725 | 41.55 | 8600 | 0.5550 | 0.7321 | 0.7310 | | 0.4741 | 42.51 | 8800 | 0.5622 | 0.7276 | 0.7268 | | 0.4782 | 
43.48 | 9000 | 0.5699 | 0.7255 | 0.7250 | | 0.4769 | 44.44 | 9200 | 0.5622 | 0.7260 | 0.7253 | | 0.4748 | 45.41 | 9400 | 0.5583 | 0.7289 | 0.7280 | | 0.4696 | 46.38 | 9600 | 0.5659 | 0.7268 | 0.7262 | | 0.4757 | 47.34 | 9800 | 0.5590 | 0.7283 | 0.7274 | | 0.4715 | 48.31 | 10000 | 0.5565 | 0.7311 | 0.7301 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
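Since the card gives no loading instructions, the sketch below shows one plausible way to attach this PEFT adapter to its base model. Several details are assumptions: the sequence-classification head, `num_labels=2` (inferred from the binary F1/accuracy metrics), the need for `trust_remote_code`, and the raw-DNA-string tokenizer input.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"  # base model named in this card
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_22M-L8_f"  # this record's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True  # binary task assumed
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("ACGTAGCTAGCTAGCATCG", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
print(logits)
```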
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T07:14:10+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K14ac-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5218 - F1 Score: 0.7508 - Accuracy: 0.7495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.591 | 0.97 | 200 | 0.5571 | 0.7151 | 0.7132 | | 0.5523 | 1.93 | 400 | 0.5462 | 0.7244 | 0.7225 | | 0.5429 | 2.9 | 600 | 0.5834 | 0.6928 | 0.6941 | | 0.5338 | 3.86 | 800 | 0.5236 | 0.7499 | 0.7483 | | 0.5238 | 4.83 | 1000 | 0.5718 | 0.7138 | 0.7138 | | 0.5197 | 5.8 | 1200 | 0.5510 | 0.7215 | 0.7204 | | 0.5106 | 6.76 | 1400 | 0.5235 | 0.7443 | 0.7425 | | 0.5061 | 7.73 | 1600 | 0.5293 | 0.7424 | 0.7407 | | 0.4987 | 8.7 | 1800 | 0.5519 | 0.7225 | 0.7216 | | 0.4931 | 9.66 | 2000 | 0.5417 | 0.7339 | 0.7325 | | 0.4952 | 10.63 | 2200 | 0.5692 | 0.7228 | 0.7225 | | 0.4803 | 11.59 | 2400 | 0.5238 | 0.7500 | 0.7483 | | 0.4817 | 12.56 | 2600 | 0.5611 | 0.7311 | 0.7301 | | 0.4765 | 13.53 | 2800 | 0.5650 | 0.7246 | 0.7238 | | 0.4737 | 14.49 | 3000 | 0.5579 | 0.7314 | 0.7304 | | 0.4639 | 15.46 | 3200 | 0.5282 | 0.7560 | 0.7543 | | 0.4625 | 16.43 | 3400 | 0.5657 | 0.7300 | 0.7292 | | 0.4589 | 17.39 | 3600 | 0.5313 | 0.7491 | 0.7474 | | 0.4557 | 18.36 | 3800 | 0.5281 | 0.7509 | 0.7492 | | 0.4506 | 19.32 | 4000 | 0.5390 | 0.7505 | 0.7489 | | 0.4489 | 20.29 | 4200 | 0.5549 | 0.7426 | 0.7410 | | 0.4429 | 21.26 | 4400 | 0.5728 | 0.7314 | 0.7304 | | 0.4376 | 22.22 | 4600 | 0.5689 | 0.7389 | 0.7377 | | 0.4364 | 23.19 | 4800 | 0.5565 | 0.7460 | 0.7443 | | 0.4314 | 24.15 | 5000 | 0.5826 | 0.7366 | 0.7352 | | 0.4322 | 25.12 | 5200 | 0.5956 | 0.7316 | 0.7310 | | 0.4272 | 26.09 | 5400 | 0.5889 | 0.7316 | 0.7310 | | 0.4216 | 27.05 | 5600 | 0.6030 | 0.7227 | 0.7222 | | 0.4224 | 28.02 | 5800 | 0.5593 | 0.7408 | 0.7392 | | 0.4186 | 28.99 | 6000 | 0.5638 | 0.7383 | 0.7368 | | 0.4117 | 29.95 | 6200 | 0.5925 | 0.7312 | 0.7298 | | 0.4127 | 30.92 | 6400 | 0.5517 | 0.7535 | 0.7519 | | 0.4127 | 31.88 | 6600 | 0.5605 | 0.7422 | 0.7404 | | 0.4021 | 32.85 | 6800 | 0.6189 | 0.7162 | 0.7159 | | 0.4126 | 33.82 | 7000 | 0.5915 | 0.7305 | 0.7295 | | 0.4044 | 34.78 | 7200 | 0.6099 | 0.7243 | 0.7234 | | 0.4034 | 35.75 | 7400 | 0.5837 | 0.7449 | 0.7431 | | 0.3982 | 36.71 | 7600 | 0.5789 | 0.7379 | 0.7362 | | 0.3992 | 37.68 | 7800 | 0.5947 | 0.7371 | 0.7356 | | 0.3941 | 38.65 | 8000 | 0.5931 | 0.7369 | 0.7352 | | 0.4018 | 39.61 | 8200 | 0.5757 | 0.7373 | 0.7356 | | 0.3907 | 40.58 | 8400 | 0.5994 | 0.7328 | 0.7313 | | 0.3885 | 41.55 | 8600 | 0.5880 | 0.7360 | 0.7343 | | 0.3906 | 42.51 | 8800 | 0.5991 | 0.7352 | 0.7337 | | 
0.3922 | 43.48 | 9000 | 0.6040 | 0.7355 | 0.7340 | | 0.3891 | 44.44 | 9200 | 0.5991 | 0.7325 | 0.7310 | | 0.3901 | 45.41 | 9400 | 0.5960 | 0.7353 | 0.7337 | | 0.3827 | 46.38 | 9600 | 0.6006 | 0.7344 | 0.7328 | | 0.3903 | 47.34 | 9800 | 0.5957 | 0.7341 | 0.7325 | | 0.3822 | 48.31 | 10000 | 0.5957 | 0.7360 | 0.7343 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T07:15:04+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me2-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset. It achieves the following results on the evaluation set: - Loss: 0.6102 - F1 Score: 0.6635 - Accuracy: 0.6641 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6594 | 1.04 | 200 | 0.6350 | 0.6278 | 0.6452 | | 0.6279 | 2.08 | 400 | 0.6509 | 0.6156 | 0.6142 | | 0.6215 | 3.12 | 600 | 0.6197 | 0.6498 | 0.6569 | | 0.6198 | 4.17 | 800 | 0.6238 | 0.6528 | 0.6530 | | 0.6164 | 5.21 | 1000 | 0.6351 | 0.6439 | 0.6413 | | 0.6143 | 6.25 | 1200 | 0.6251 | 0.6526 | 0.6514 | | 0.6094 | 7.29 | 1400 | 0.6514 | 0.6389 | 0.6370 | | 0.6118 | 8.33 | 1600 | 0.6291 | 0.6483 | 0.6461 | | 0.6083 | 9.38 | 1800 | 0.6441 | 0.6394 | 0.6370 | | 0.6091 | 10.42 | 2000 | 0.6271 | 0.6558 | 0.6540 | | 0.6093 | 11.46 | 2200 | 0.6177 | 0.6637 | 0.6641 | | 0.6023 | 12.5 | 2400 | 0.6247 | 0.6611 | 0.6598 | | 0.6038 | 13.54 | 2600 | 0.6215 | 0.6641 | 0.6644 | | 0.6036 | 14.58 | 2800 | 0.6186 | 0.6655 | 0.6660 | | 0.6065 | 15.62 | 3000 | 0.6188 | 0.6639 | 0.6644 | | 0.6012 | 16.67 | 3200 | 0.6293 | 0.6601 | 0.6582 | | 0.6019 | 17.71 | 3400 | 0.6146 | 0.6648 | 0.6663 | | 0.6001 | 18.75 | 3600 | 0.6185 | 0.6613 | 0.6608 | | 0.6018 | 19.79 | 3800 | 0.6233 | 0.6602 | 0.6585 | | 0.5952 | 20.83 | 4000 | 0.6271 | 0.6582 | 0.6559 | | 0.6011 | 21.88 | 4200 | 0.6344 | 0.6531 | 0.6507 | | 0.5985 | 22.92 | 4400 | 0.6307 | 0.6550 | 0.6527 | | 0.5985 | 23.96 | 4600 | 0.6302 | 0.6541 | 0.6517 | | 0.597 | 25.0 | 4800 | 0.6205 | 0.6621 | 0.6611 | | 0.5955 | 26.04 | 5000 | 0.6208 | 0.6615 | 0.6601 | | 0.5967 | 27.08 | 5200 | 0.6218 | 0.6590 | 0.6575 | | 0.5962 | 28.12 | 5400 | 0.6185 | 0.6602 | 0.6595 | | 0.5958 | 29.17 | 5600 | 0.6261 | 0.6559 | 0.6536 | | 0.5917 | 30.21 | 5800 | 0.6295 | 0.6586 | 0.6566 | | 0.5958 | 31.25 | 6000 | 0.6255 | 0.6601 | 0.6582 | | 0.594 | 32.29 | 6200 | 0.6265 | 0.6553 | 0.6530 | | 0.5939 | 33.33 | 6400 | 0.6272 | 0.6591 | 0.6569 | | 0.5944 | 34.38 | 6600 | 0.6167 | 0.6595 | 0.6595 | | 0.5914 | 35.42 | 6800 | 0.6168 | 0.6606 | 0.6605 | | 0.5926 | 36.46 | 7000 | 0.6161 | 0.6625 | 0.6621 | | 0.59 | 37.5 | 7200 | 0.6215 | 0.6569 | 0.6553 | | 0.592 | 38.54 | 7400 | 0.6194 | 0.6636 | 0.6628 | | 0.5945 | 39.58 | 7600 | 0.6206 | 0.6614 | 0.6601 | | 0.5938 | 40.62 | 7800 | 0.6278 | 0.6516 | 0.6491 | | 0.5903 | 41.67 | 8000 | 0.6237 | 0.6576 | 0.6556 | | 0.5882 | 42.71 | 8200 | 0.6163 | 0.6654 | 0.6660 | | 0.5929 | 43.75 | 8400 | 0.6207 | 0.6587 | 0.6572 | | 0.59 | 44.79 | 8600 | 0.6260 | 0.6561 | 0.6540 | | 0.589 | 45.83 | 8800 | 0.6206 | 0.6569 | 0.6556 | | 0.592 | 
46.88 | 9000 | 0.6254 | 0.6563 | 0.6543 | | 0.5893 | 47.92 | 9200 | 0.6223 | 0.6559 | 0.6543 | | 0.5906 | 48.96 | 9400 | 0.6215 | 0.6571 | 0.6556 | | 0.5891 | 50.0 | 9600 | 0.6219 | 0.6568 | 0.6553 | | 0.5898 | 51.04 | 9800 | 0.6223 | 0.6581 | 0.6566 | | 0.5886 | 52.08 | 10000 | 0.6221 | 0.6581 | 0.6566 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T07:17:21+00:00