Dataset schema (one row per column, with dtype and observed value or length range):

| Column          | Dtype           | Values / lengths |
|-----------------|-----------------|------------------|
| pipeline_tag    | stringclasses   | 48 values        |
| library_name    | stringclasses   | 198 values       |
| text            | stringlengths   | 1 to 900k        |
| metadata        | stringlengths   | 2 to 438k        |
| id              | stringlengths   | 5 to 122         |
| last_modified   | null            | n/a              |
| tags            | sequencelengths | 1 to 1.84k       |
| sha             | null            | n/a              |
| created_at      | stringlengths   | 25 to 25         |
| arxiv           | sequencelengths | 0 to 201         |
| languages       | sequencelengths | 0 to 1.83k       |
| tags_str        | stringlengths   | 17 to 9.34k      |
| text_str        | stringlengths   | 0 to 389k        |
| text_lists      | sequencelengths | 0 to 722         |
| processed_texts | sequencelengths | 1 to 723         |
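The records below follow this schema. A minimal sketch of loading and inspecting them with the 🤗 `datasets` library, assuming the dump has been exported to a local JSONL file (the filename `model_cards.jsonl` is hypothetical):

```python
from datasets import load_dataset

# Hypothetical local export of this dump; substitute the real path or Hub dataset id.
ds = load_dataset("json", data_files="model_cards.jsonl", split="train")

# Each row mirrors the schema above: the raw README markdown lives in `text`,
# the card's YAML front matter is serialized into `metadata`, and derived
# fields such as `text_str` hold a markdown-stripped rendering of the card.
print(ds.column_names)
print(ds[0]["id"], ds[0]["pipeline_tag"], ds[0]["library_name"])
```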
text-to-speech
transformers
# VITS Base sw-KE-OpenBible

VITS Base sw-KE-OpenBible is an end-to-end text-to-speech model based on the [VITS](https://arxiv.org/abs/2106.06103) architecture. This model was trained from scratch on a real audio dataset. The list of real speakers includes:

- sw-KE-OpenBible

The model's [vocabulary](https://huggingface.co/bookbot/vits-base-sw-KE-OpenBible/blob/main/symbols.py) contains the IPA phonemes found in [gruut](https://github.com/rhasspy/gruut). This model was trained using the [VITS](https://github.com/jaywalnut310/vits) framework. All training was done on a Scaleway L40S VM with an NVIDIA L40S GPU. All scripts used for training can be found in the [Files and versions](https://huggingface.co/bookbot/vits-base-sw-KE-OpenBible/tree/main) tab, as well as the [Training metrics](https://huggingface.co/bookbot/vits-base-sw-KE-OpenBible/tensorboard) logged via TensorBoard.

## Model

| Model                     | SR (Hz) | Mel range (Hz) | FFT / Hop / Win   | #epochs |
| ------------------------- | ------- | -------------- | ----------------- | ------- |
| VITS Base sw-KE-OpenBible | 44.1K   | 0-null         | 2048 / 512 / 2048 | 12000   |

## Training procedure

### Prepare Data

```sh
python preprocess.py \
  --text_index 1 \
  --filelists filelists/sw-KE-OpenBible_text_train_filelist.txt filelists/sw-KE-OpenBible_text_val_filelist.txt \
  --text_cleaners swahili_cleaners
```

### Train

```sh
python train.py -c configs/sw_ke_openbible_base.json -m sw_ke_openbible_base
```

## Frameworks

- PyTorch 2.2.2
- [VITS](https://github.com/bookbot-hive/vits)
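Since the card stops at the training commands, here is a minimal inference sketch following the conventions of the upstream VITS repository's inference notebook. It assumes a checkout of that repo; the checkpoint filename `G_latest.pth` is hypothetical, and the module names (`utils`, `models`, `text`) come from the VITS codebase rather than from this model card:

```python
import torch

# These modules ship with the VITS repo (https://github.com/jaywalnut310/vits);
# run from a checkout of that repo with the generator checkpoint downloaded locally.
import utils
from models import SynthesizerTrn
from text import text_to_sequence
from text.symbols import symbols

hps = utils.get_hparams_from_file("configs/sw_ke_openbible_base.json")
net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model,
)
net_g.eval()
utils.load_checkpoint("G_latest.pth", net_g, None)  # hypothetical checkpoint name

text = "Habari ya asubuhi."  # Swahili input
seq = text_to_sequence(text, hps.data.text_cleaners)
# If hps.data.add_blank is set, the repo's get_text() helper additionally
# intersperses blank tokens here via commons.intersperse(seq, 0).
x = torch.LongTensor(seq).unsqueeze(0)
x_lengths = torch.LongTensor([x.size(1)])
with torch.no_grad():
    audio = net_g.infer(x, x_lengths, noise_scale=0.667, length_scale=1.0)[0][0, 0]
# `audio` is a 1-D waveform tensor at hps.data.sampling_rate (44.1 kHz here).
```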
{"language": "sw", "license": "cc-by-sa-4.0", "tags": ["audio", "text-to-speech"], "datasets": ["bookbot/OpenBible_Swahili"], "inference": false}
bookbot/vits-base-sw-KE-OpenBible
null
[ "transformers", "tensorboard", "onnx", "audio", "text-to-speech", "sw", "dataset:bookbot/OpenBible_Swahili", "arxiv:2106.06103", "license:cc-by-sa-4.0", "region:us" ]
null
2024-04-27T05:00:25+00:00
[ "2106.06103" ]
[ "sw" ]
TAGS #transformers #tensorboard #onnx #audio #text-to-speech #sw #dataset-bookbot/OpenBible_Swahili #arxiv-2106.06103 #license-cc-by-sa-4.0 #region-us
VITS Base sw-KE-OpenBible ========================= VITS Base sw-KE-OpenBible is an end-to-end text-to-speech model based on the VITS architecture. This model was trained from scratch on a real audio dataset. The list of real speakers includes: * sw-KE-OpenBible The model's vocabulary contains the IPA phonemes found in gruut. This model was trained using the VITS framework. All training was done on a Scaleway L40S VM with an NVIDIA L40S GPU. All scripts used for training can be found in the Files and versions tab, as well as the Training metrics logged via TensorBoard. Model ----- Training procedure ------------------ ### Prepare Data ### Train Frameworks ---------- * PyTorch 2.2.2 * VITS
[ "### Prepare Data", "### Train\n\n\nFrameworks\n----------\n\n\n* PyTorch 2.2.2\n* VITS" ]
[ "TAGS\n#transformers #tensorboard #onnx #audio #text-to-speech #sw #dataset-bookbot/OpenBible_Swahili #arxiv-2106.06103 #license-cc-by-sa-4.0 #region-us \n", "### Prepare Data", "### Train\n\n\nFrameworks\n----------\n\n\n* PyTorch 2.2.2\n* VITS" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H3K36me3-seqsight_8192_512_30M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset. It achieves the following results on the evaluation set:
- Loss: 0.4528
- F1 Score: 0.8082
- Accuracy: 0.8093

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5504        | 0.92  | 200   | 0.5243          | 0.7520   | 0.7557   |
| 0.495         | 1.83  | 400   | 0.4926          | 0.7711   | 0.7735   |
| 0.4748        | 2.75  | 600   | 0.4782          | 0.7897   | 0.7907   |
| 0.4769        | 3.67  | 800   | 0.4731          | 0.7872   | 0.7884   |
| 0.4611        | 4.59  | 1000  | 0.4776          | 0.7901   | 0.7913   |
| 0.4548        | 5.5   | 1200  | 0.4617          | 0.7951   | 0.7967   |
| 0.4537        | 6.42  | 1400  | 0.4620          | 0.7935   | 0.7944   |
| 0.4473        | 7.34  | 1600  | 0.4742          | 0.7835   | 0.7864   |
| 0.4438        | 8.26  | 1800  | 0.4641          | 0.7968   | 0.7982   |
| 0.4465        | 9.17  | 2000  | 0.4581          | 0.7982   | 0.7996   |
| 0.4427        | 10.09 | 2200  | 0.4802          | 0.7861   | 0.7893   |
| 0.4354        | 11.01 | 2400  | 0.4584          | 0.7956   | 0.7976   |
| 0.4315        | 11.93 | 2600  | 0.4521          | 0.8038   | 0.8048   |
| 0.4339        | 12.84 | 2800  | 0.4611          | 0.7950   | 0.7973   |
| 0.4278        | 13.76 | 3000  | 0.4766          | 0.7942   | 0.7967   |
| 0.4238        | 14.68 | 3200  | 0.4622          | 0.7979   | 0.7993   |
| 0.4255        | 15.6  | 3400  | 0.4556          | 0.7987   | 0.8005   |
| 0.4231        | 16.51 | 3600  | 0.4720          | 0.7946   | 0.7967   |
| 0.4193        | 17.43 | 3800  | 0.4731          | 0.7974   | 0.7996   |
| 0.4162        | 18.35 | 4000  | 0.4612          | 0.7973   | 0.7990   |
| 0.4174        | 19.27 | 4200  | 0.4681          | 0.7951   | 0.7970   |
| 0.4169        | 20.18 | 4400  | 0.4799          | 0.7926   | 0.7953   |
| 0.4089        | 21.1  | 4600  | 0.4730          | 0.7968   | 0.7987   |
| 0.4104        | 22.02 | 4800  | 0.4677          | 0.7988   | 0.8005   |
| 0.4079        | 22.94 | 5000  | 0.4624          | 0.7994   | 0.8010   |
| 0.4058        | 23.85 | 5200  | 0.4611          | 0.7986   | 0.8005   |
| 0.4021        | 24.77 | 5400  | 0.4847          | 0.7924   | 0.7953   |
| 0.4003        | 25.69 | 5600  | 0.4651          | 0.7992   | 0.8010   |
| 0.4027        | 26.61 | 5800  | 0.4618          | 0.8017   | 0.8030   |
| 0.403         | 27.52 | 6000  | 0.4911          | 0.7939   | 0.7962   |
| 0.3979        | 28.44 | 6200  | 0.4624          | 0.7982   | 0.8002   |
| 0.3955        | 29.36 | 6400  | 0.4697          | 0.8001   | 0.8022   |
| 0.3953        | 30.28 | 6600  | 0.4730          | 0.7960   | 0.7982   |
| 0.3967        | 31.19 | 6800  | 0.4697          | 0.7971   | 0.7987   |
| 0.3944        | 32.11 | 7000  | 0.4696          | 0.7996   | 0.8016   |
| 0.3904        | 33.03 | 7200  | 0.4674          | 0.8012   | 0.8028   |
| 0.3889        | 33.94 | 7400  | 0.4709          | 0.7990   | 0.8007   |
| 0.3909        | 34.86 | 7600  | 0.4703          | 0.7995   | 0.8013   |
| 0.3881        | 35.78 | 7800  | 0.4676          | 0.7993   | 0.8007   |
| 0.3898        | 36.7  | 8000  | 0.4687          | 0.7954   | 0.7973   |
| 0.3871        | 37.61 | 8200  | 0.4815          | 0.7948   | 0.7976   |
| 0.3835        | 38.53 | 8400  | 0.4772          | 0.7976   | 0.7996   |
| 0.3864        | 39.45 | 8600  | 0.4755          | 0.7975   | 0.7993   |
| 0.3838        | 40.37 | 8800  | 0.4882          | 0.7940   | 0.7964   |
| 0.3855        | 41.28 | 9000  | 0.4740          | 0.7971   | 0.7990   |
| 0.3826        | 42.2  | 9200  | 0.4754          | 0.7984   | 0.8002   |
| 0.3785        | 43.12 | 9400  | 0.4802          | 0.7988   | 0.8005   |
| 0.3831        | 44.04 | 9600  | 0.4778          | 0.7976   | 0.7996   |
| 0.38          | 44.95 | 9800  | 0.4802          | 0.7957   | 0.7979   |
| 0.3821        | 45.87 | 10000 | 0.4787          | 0.7986   | 0.8005   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
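The card gives no usage snippet; a minimal sketch of attaching the adapter with 🤗 PEFT follows. The base model's architecture and task head are not documented here, so `AutoModel` and `trust_remote_code=True` are assumptions rather than a documented recipe:

```python
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_30M-L8_f"

# Assumed loading path: the card does not state the base architecture,
# so AutoModel + trust_remote_code stands in for the real model class.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModel.from_pretrained(base_id, trust_remote_code=True)

# Attach the PEFT adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```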
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:05:40+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_EMP\_H3K36me3-seqsight\_8192\_512\_30M-L8\_f ================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4528 * F1 Score: 0.8082 * Accuracy: 0.8093 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
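This card's "How to Get Started" section is a placeholder, so the following is only a generic sketch for a `stablelm` text-generation checkpoint such as this record's `pruning/8suk5so`; nothing in the card confirms that standard `transformers` loading, a chat template, or these generation settings apply:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pruning/8suk5so"  # repo id from this record; all loading details are assumptions

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

# The tags mark the model as conversational; this assumes a chat template is defined.
messages = [{"role": "user", "content": "Summarize what model pruning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```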
{"library_name": "transformers", "tags": []}
pruning/8suk5so
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/2fk4b8i
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/l60p7h9
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/t3sx545
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/ho6vhk0
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
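The quick-start section above is only a placeholder, so the snippet below is a minimal sketch rather than an official example: it assumes the checkpoint (repo id `pruning/ho6vhk0`, taken from this record; tags: stablelm, text-generation, conversational) loads through the standard causal-LM auto classes and ships a chat template.

```python
# Minimal sketch, not an official quick-start: assumes the checkpoint works
# with the standard causal-LM auto classes and defines a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pruning/ho6vhk0"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The "conversational" tag suggests chat-style prompting.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```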
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/jwbsvvq
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:05:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
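This record carries the same empty template as the previous one; as a variation, a pipeline-based sketch — again an assumption, since only the repo id `pruning/jwbsvvq` and the text-generation tag come from the record itself:

```python
# Sketch only: nothing here comes from the card except the repo id and
# the text-generation tag; the pipeline wiring is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="pruning/jwbsvvq")
result = generator("Once upon a time", max_new_tokens=32)
print(result[0]["generated_text"])
```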
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Aju020/fine-tuned-QA
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:08:29+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
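The card states neither task nor architecture; the repo name `Aju020/fine-tuned-QA` merely suggests extractive question answering, so the sketch below is a guess, not documented usage:

```python
# Hypothetical usage: the task is inferred from the repo name alone,
# so this may not match the model's actual head or training.
from transformers import pipeline

qa = pipeline("question-answering", model="Aju020/fine-tuned-QA")
answer = qa(
    question="What was the model fine-tuned for?",
    context="The repository name suggests a question-answering fine-tune.",
)
print(answer["answer"], answer["score"])
```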
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_EMP_H3K36me3-seqsight_8192_512_30M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4416
- F1 Score: 0.8078
- Accuracy: 0.8091

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5397        | 0.92  | 200   | 0.5132          | 0.7725   | 0.7755   |
| 0.4784        | 1.83  | 400   | 0.4743          | 0.7906   | 0.7921   |
| 0.4617        | 2.75  | 600   | 0.4669          | 0.7920   | 0.7933   |
| 0.4644        | 3.67  | 800   | 0.4588          | 0.7956   | 0.7967   |
| 0.4481        | 4.59  | 1000  | 0.4657          | 0.7914   | 0.7930   |
| 0.4384        | 5.5   | 1200  | 0.4599          | 0.7972   | 0.7993   |
| 0.4356        | 6.42  | 1400  | 0.4646          | 0.7983   | 0.7996   |
| 0.432         | 7.34  | 1600  | 0.4629          | 0.7906   | 0.7936   |
| 0.423         | 8.26  | 1800  | 0.4520          | 0.8027   | 0.8045   |
| 0.4248        | 9.17  | 2000  | 0.4585          | 0.7986   | 0.8005   |
| 0.4159        | 10.09 | 2200  | 0.4908          | 0.7898   | 0.7930   |
| 0.4081        | 11.01 | 2400  | 0.4625          | 0.8008   | 0.8022   |
| 0.4021        | 11.93 | 2600  | 0.4465          | 0.8056   | 0.8065   |
| 0.401         | 12.84 | 2800  | 0.4689          | 0.7944   | 0.7970   |
| 0.3912        | 13.76 | 3000  | 0.4865          | 0.7892   | 0.7924   |
| 0.3842        | 14.68 | 3200  | 0.4810          | 0.7979   | 0.7996   |
| 0.3848        | 15.6  | 3400  | 0.4648          | 0.8048   | 0.8062   |
| 0.3793        | 16.51 | 3600  | 0.4945          | 0.7921   | 0.7953   |
| 0.3715        | 17.43 | 3800  | 0.5056          | 0.7894   | 0.7924   |
| 0.3643        | 18.35 | 4000  | 0.4799          | 0.7921   | 0.7933   |
| 0.3643        | 19.27 | 4200  | 0.5064          | 0.7943   | 0.7967   |
| 0.3585        | 20.18 | 4400  | 0.5221          | 0.7948   | 0.7967   |
| 0.3478        | 21.1  | 4600  | 0.5012          | 0.7999   | 0.8013   |
| 0.3482        | 22.02 | 4800  | 0.4800          | 0.8000   | 0.8013   |
| 0.3427        | 22.94 | 5000  | 0.4995          | 0.7917   | 0.7936   |
| 0.336         | 23.85 | 5200  | 0.5136          | 0.7859   | 0.7887   |
| 0.3316        | 24.77 | 5400  | 0.5251          | 0.7890   | 0.7916   |
| 0.3233        | 25.69 | 5600  | 0.5280          | 0.7936   | 0.7953   |
| 0.3278        | 26.61 | 5800  | 0.5122          | 0.7953   | 0.7967   |
| 0.3214        | 27.52 | 6000  | 0.5402          | 0.7933   | 0.7953   |
| 0.3166        | 28.44 | 6200  | 0.5342          | 0.7893   | 0.7910   |
| 0.3119        | 29.36 | 6400  | 0.5471          | 0.7800   | 0.7833   |
| 0.31          | 30.28 | 6600  | 0.5697          | 0.7820   | 0.7850   |
| 0.3068        | 31.19 | 6800  | 0.5411          | 0.7872   | 0.7890   |
| 0.2998        | 32.11 | 7000  | 0.5673          | 0.7887   | 0.7910   |
| 0.298         | 33.03 | 7200  | 0.5327          | 0.7891   | 0.7907   |
| 0.2924        | 33.94 | 7400  | 0.5371          | 0.7892   | 0.7907   |
| 0.2926        | 34.86 | 7600  | 0.5581          | 0.7880   | 0.7899   |
| 0.2896        | 35.78 | 7800  | 0.5511          | 0.7881   | 0.7893   |
| 0.2879        | 36.7  | 8000  | 0.5621          | 0.7792   | 0.7815   |
| 0.2847        | 37.61 | 8200  | 0.5863          | 0.7802   | 0.7827   |
| 0.2811        | 38.53 | 8400  | 0.5956          | 0.7816   | 0.7844   |
| 0.2809        | 39.45 | 8600  | 0.5839          | 0.7846   | 0.7867   |
| 0.2782        | 40.37 | 8800  | 0.6085          | 0.7850   | 0.7876   |
| 0.2746        | 41.28 | 9000  | 0.5868          | 0.7793   | 0.7818   |
| 0.2754        | 42.2  | 9200  | 0.5840          | 0.7823   | 0.7844   |
| 0.2705        | 43.12 | 9400  | 0.5863          | 0.7822   | 0.7841   |
| 0.271         | 44.04 | 9600  | 0.5937          | 0.7814   | 0.7838   |
| 0.2689        | 44.95 | 9800  | 0.5956          | 0.7805   | 0.7830   |
| 0.267         | 45.87 | 10000 | 0.5955          | 0.7824   | 0.7847   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:08:34+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_EMP\_H3K36me3-seqsight\_8192\_512\_30M-L32\_f ================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4416 * F1 Score: 0.8078 * Accuracy: 0.8091 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
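The hyperparameter list above can be expressed with the Hugging Face `Trainer`; the actual training script is not included in the card, so everything beyond the reported values (argument names, output directory) is an assumption:

```python
# Reconstruction of the reported hyperparameters as TrainingArguments;
# the output_dir and anything not listed in the card are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H3K36me3-seqsight_8192_512_30M-L32_f",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,
    max_steps=10_000,                 # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # epsilon: 1e-08
)
```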
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned-food101

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6105
- Accuracy: 0.8400

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 4.1344        | 0.0248 | 100   | 4.0304          | 0.3063   |
| 3.5328        | 0.0497 | 200   | 3.3729          | 0.4410   |
| 2.9715        | 0.0745 | 300   | 2.8900          | 0.5135   |
| 2.724         | 0.0994 | 400   | 2.5096          | 0.5443   |
| 2.311         | 0.1242 | 500   | 2.1726          | 0.5895   |
| 2.266         | 0.1491 | 600   | 2.0223          | 0.5880   |
| 1.9671        | 0.1739 | 700   | 1.7585          | 0.6330   |
| 1.8617        | 0.1988 | 800   | 1.7300          | 0.6212   |
| 1.4694        | 0.2236 | 900   | 1.7507          | 0.6078   |
| 1.7876        | 0.2484 | 1000  | 1.6520          | 0.6133   |
| 1.7647        | 0.2733 | 1100  | 1.4576          | 0.6598   |
| 1.7           | 0.2981 | 1200  | 1.4420          | 0.6577   |
| 1.533         | 0.3230 | 1300  | 1.4389          | 0.6537   |
| 1.3895        | 0.3478 | 1400  | 1.4178          | 0.6587   |
| 1.5497        | 0.3727 | 1500  | 1.3048          | 0.6861   |
| 1.3327        | 0.3975 | 1600  | 1.3361          | 0.6714   |
| 1.53          | 0.4224 | 1700  | 1.3425          | 0.6697   |
| 1.538         | 0.4472 | 1800  | 1.3453          | 0.6642   |
| 1.5056        | 0.4720 | 1900  | 1.2742          | 0.6783   |
| 1.2728        | 0.4969 | 2000  | 1.1779          | 0.7045   |
| 1.1734        | 0.5217 | 2100  | 1.2630          | 0.6808   |
| 1.527         | 0.5466 | 2200  | 1.1810          | 0.7023   |
| 1.3873        | 0.5714 | 2300  | 1.1831          | 0.7040   |
| 1.3545        | 0.5963 | 2400  | 1.1836          | 0.7002   |
| 1.4842        | 0.6211 | 2500  | 1.1441          | 0.7129   |
| 1.1974        | 0.6460 | 2600  | 1.1230          | 0.7155   |
| 1.4204        | 0.6708 | 2700  | 1.1766          | 0.7002   |
| 1.152         | 0.6957 | 2800  | 1.2166          | 0.6950   |
| 1.162         | 0.7205 | 2900  | 1.1674          | 0.7003   |
| 1.4516        | 0.7453 | 3000  | 1.1207          | 0.7140   |
| 1.2378        | 0.7702 | 3100  | 1.2072          | 0.6906   |
| 0.991         | 0.7950 | 3200  | 1.1122          | 0.7131   |
| 1.3078        | 0.8199 | 3300  | 1.1207          | 0.7170   |
| 1.1483        | 0.8447 | 3400  | 1.0665          | 0.7245   |
| 1.453         | 0.8696 | 3500  | 1.0640          | 0.7267   |
| 1.4457        | 0.8944 | 3600  | 1.0565          | 0.7321   |
| 1.1636        | 0.9193 | 3700  | 1.0576          | 0.7255   |
| 1.157         | 0.9441 | 3800  | 1.0648          | 0.7261   |
| 1.1923        | 0.9689 | 3900  | 1.0473          | 0.7271   |
| 1.2325        | 0.9938 | 4000  | 1.0501          | 0.7298   |
| 1.1503        | 1.0186 | 4100  | 1.0566          | 0.7243   |
| 1.0633        | 1.0435 | 4200  | 1.0005          | 0.7444   |
| 1.2061        | 1.0683 | 4300  | 1.0196          | 0.7377   |
| 1.0315        | 1.0932 | 4400  | 1.0139          | 0.7392   |
| 1.038         | 1.1180 | 4500  | 1.0299          | 0.7318   |
| 0.7728        | 1.1429 | 4600  | 1.0522          | 0.7257   |
| 0.9302        | 1.1677 | 4700  | 1.0219          | 0.7362   |
| 1.1084        | 1.1925 | 4800  | 0.9940          | 0.7349   |
| 1.0345        | 1.2174 | 4900  | 0.9775          | 0.7446   |
| 1.0541        | 1.2422 | 5000  | 1.0076          | 0.7366   |
| 0.9345        | 1.2671 | 5100  | 1.0075          | 0.7398   |
| 0.9149        | 1.2919 | 5200  | 1.0558          | 0.7261   |
| 1.2583        | 1.3168 | 5300  | 0.9703          | 0.7476   |
| 1.0745        | 1.3416 | 5400  | 0.9902          | 0.7425   |
| 0.8319        | 1.3665 | 5500  | 0.9442          | 0.7553   |
| 1.1286        | 1.3913 | 5600  | 0.9620          | 0.7532   |
| 0.8228        | 1.4161 | 5700  | 0.9329          | 0.7555   |
| 1.3209        | 1.4410 | 5800  | 0.9402          | 0.7543   |
| 0.7629        | 1.4658 | 5900  | 0.9497          | 0.7547   |
| 0.9906        | 1.4907 | 6000  | 0.9362          | 0.7589   |
| 0.9966        | 1.5155 | 6100  | 0.9322          | 0.7595   |
| 0.8868        | 1.5404 | 6200  | 0.9613          | 0.7506   |
| 0.956         | 1.5652 | 6300  | 0.9370          | 0.7568   |
| 1.1833        | 1.5901 | 6400  | 0.9277          | 0.7597   |
| 0.9747        | 1.6149 | 6500  | 0.8777          | 0.7696   |
| 1.0119        | 1.6398 | 6600  | 0.8980          | 0.7653   |
| 0.9764        | 1.6646 | 6700  | 0.9071          | 0.7641   |
| 1.0528        | 1.6894 | 6800  | 0.8941          | 0.7694   |
| 0.942         | 1.7143 | 6900  | 0.8718          | 0.7737   |
| 1.0387        | 1.7391 | 7000  | 0.8615          | 0.7787   |
| 0.9054        | 1.7640 | 7100  | 0.8689          | 0.7735   |
| 1.0327        | 1.7888 | 7200  | 0.8953          | 0.7692   |
| 0.8425        | 1.8137 | 7300  | 0.8533          | 0.7761   |
| 0.9388        | 1.8385 | 7400  | 0.8772          | 0.7687   |
| 1.1037        | 1.8634 | 7500  | 0.8634          | 0.7731   |
| 0.9659        | 1.8882 | 7600  | 0.8502          | 0.7766   |
| 1.0133        | 1.9130 | 7700  | 0.8479          | 0.7766   |
| 0.8395        | 1.9379 | 7800  | 0.8052          | 0.7889   |
| 0.8803        | 1.9627 | 7900  | 0.8379          | 0.7775   |
| 0.7866        | 1.9876 | 8000  | 0.8283          | 0.7835   |
| 0.5067        | 2.0124 | 8100  | 0.8207          | 0.7835   |
| 0.7083        | 2.0373 | 8200  | 0.8320          | 0.7803   |
| 0.6581        | 2.0621 | 8300  | 0.8162          | 0.7869   |
| 0.7376        | 2.0870 | 8400  | 0.8222          | 0.7871   |
| 0.6492        | 2.1118 | 8500  | 0.8153          | 0.7868   |
| 0.6356        | 2.1366 | 8600  | 0.7930          | 0.7929   |
| 0.7626        | 2.1615 | 8700  | 0.8167          | 0.7874   |
| 0.7389        | 2.1863 | 8800  | 0.8076          | 0.7889   |
| 0.503         | 2.2112 | 8900  | 0.8312          | 0.7869   |
| 0.7901        | 2.2360 | 9000  | 0.8137          | 0.7900   |
| 0.8387        | 2.2609 | 9100  | 0.8207          | 0.7832   |
| 0.7048        | 2.2857 | 9200  | 0.8105          | 0.7898   |
| 0.6412        | 2.3106 | 9300  | 0.7829          | 0.7950   |
| 0.6864        | 2.3354 | 9400  | 0.7851          | 0.7941   |
| 0.7411        | 2.3602 | 9500  | 0.7642          | 0.8031   |
| 0.6221        | 2.3851 | 9600  | 0.7603          | 0.8030   |
| 0.7769        | 2.4099 | 9700  | 0.7846          | 0.7975   |
| 0.7939        | 2.4348 | 9800  | 0.7914          | 0.7933   |
| 0.5641        | 2.4596 | 9900  | 0.7700          | 0.7992   |
| 0.8009        | 2.4845 | 10000 | 0.7699          | 0.8015   |
| 0.6111        | 2.5093 | 10100 | 0.7603          | 0.8036   |
| 0.925         | 2.5342 | 10200 | 0.7727          | 0.8003   |
| 0.6206        | 2.5590 | 10300 | 0.7765          | 0.7984   |
| 0.5977        | 2.5839 | 10400 | 0.7793          | 0.7960   |
| 0.8146        | 2.6087 | 10500 | 0.7799          | 0.7978   |
| 0.7869        | 2.6335 | 10600 | 0.7396          | 0.8087   |
| 0.8966        | 2.6584 | 10700 | 0.7386          | 0.8071   |
| 0.6654        | 2.6832 | 10800 | 0.7305          | 0.8103   |
| 0.737         | 2.7081 | 10900 | 0.7317          | 0.8083   |
| 0.9283        | 2.7329 | 11000 | 0.7409          | 0.8072   |
| 0.7491        | 2.7578 | 11100 | 0.7088          | 0.8153   |
| 0.6807        | 2.7826 | 11200 | 0.7154          | 0.8123   |
| 0.4485        | 2.8075 | 11300 | 0.6985          | 0.8180   |
| 0.6694        | 2.8323 | 11400 | 0.7124          | 0.8147   |
| 0.6661        | 2.8571 | 11500 | 0.7075          | 0.8153   |
| 0.7971        | 2.8820 | 11600 | 0.7375          | 0.8078   |
| 0.9771        | 2.9068 | 11700 | 0.7133          | 0.8133   |
| 0.5238        | 2.9317 | 11800 | 0.7077          | 0.8157   |
| 0.5636        | 2.9565 | 11900 | 0.7419          | 0.8030   |
| 0.8962        | 2.9814 | 12000 | 0.7021          | 0.8175   |
| 0.4561        | 3.0062 | 12100 | 0.7031          | 0.8162   |
| 0.4906        | 3.0311 | 12200 | 0.7104          | 0.8171   |
| 0.5422        | 3.0559 | 12300 | 0.7035          | 0.8154   |
| 0.5541        | 3.0807 | 12400 | 0.6905          | 0.8232   |
| 0.5009        | 3.1056 | 12500 | 0.6994          | 0.8173   |
| 0.4567        | 3.1304 | 12600 | 0.6911          | 0.8203   |
| 0.4431        | 3.1553 | 12700 | 0.6933          | 0.8192   |
| 0.5915        | 3.1801 | 12800 | 0.6838          | 0.8221   |
| 0.5551        | 3.2050 | 12900 | 0.6886          | 0.8199   |
| 0.4528        | 3.2298 | 13000 | 0.6883          | 0.8212   |
| 0.5563        | 3.2547 | 13100 | 0.6867          | 0.8192   |
| 0.4836        | 3.2795 | 13200 | 0.6771          | 0.8253   |
| 0.4535        | 3.3043 | 13300 | 0.6713          | 0.8249   |
| 0.468         | 3.3292 | 13400 | 0.6616          | 0.8270   |
| 0.4691        | 3.3540 | 13500 | 0.6707          | 0.8261   |
| 0.4784        | 3.3789 | 13600 | 0.6733          | 0.8241   |
| 0.5187        | 3.4037 | 13700 | 0.6658          | 0.8251   |
| 0.5105        | 3.4286 | 13800 | 0.6631          | 0.8275   |
| 0.3935        | 3.4534 | 13900 | 0.6656          | 0.8283   |
| 0.463         | 3.4783 | 14000 | 0.6554          | 0.8301   |
| 0.3259        | 3.5031 | 14100 | 0.6640          | 0.8292   |
| 0.7286        | 3.5280 | 14200 | 0.6500          | 0.8308   |
| 0.4422        | 3.5528 | 14300 | 0.6540          | 0.8313   |
| 0.4374        | 3.5776 | 14400 | 0.6497          | 0.8317   |
| 0.7962        | 3.6025 | 14500 | 0.6416          | 0.8340   |
| 0.6297        | 3.6273 | 14600 | 0.6393          | 0.8339   |
| 0.4933        | 3.6522 | 14700 | 0.6379          | 0.8336   |
| 0.5548        | 3.6770 | 14800 | 0.6300          | 0.8356   |
| 0.564         | 3.7019 | 14900 | 0.6284          | 0.8352   |
| 0.2638        | 3.7267 | 15000 | 0.6299          | 0.8338   |
| 0.6129        | 3.7516 | 15100 | 0.6253          | 0.8374   |
| 0.51          | 3.7764 | 15200 | 0.6205          | 0.8390   |
| 0.4612        | 3.8012 | 15300 | 0.6165          | 0.8390   |
| 0.5304        | 3.8261 | 15400 | 0.6112          | 0.8412   |
| 0.4738        | 3.8509 | 15500 | 0.6149          | 0.8388   |
| 0.3845        | 3.8758 | 15600 | 0.6141          | 0.8391   |
| 0.4533        | 3.9006 | 15700 | 0.6139          | 0.8399   |
| 0.3539        | 3.9255 | 15800 | 0.6131          | 0.8402   |
| 0.6485        | 3.9503 | 15900 | 0.6118          | 0.8397   |
| 0.331         | 3.9752 | 16000 | 0.6108          | 0.8397   |
| 0.3582        | 4.0    | 16100 | 0.6105          | 0.8400   |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["image-classification", "food-ingredient-classification", "food101", "food101-finetuned", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "finetuned-food101", "results": []}]}
ericmconnelly/finetuned-food101
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "food-ingredient-classification", "food101", "food101-finetuned", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:09:30+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #food-ingredient-classification #food101 #food101-finetuned #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
finetuned-food101 ================= This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the food101 dataset. It achieves the following results on the evaluation set: * Loss: 0.6105 * Accuracy: 0.8400 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
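Since the card reports an image-classification fine-tune but omits a quick-start, here is a minimal inference sketch; the repo id `ericmconnelly/finetuned-food101` comes from this record, while the image path is a placeholder:

```python
# Minimal inference sketch; the image path is a placeholder and the
# pipeline task follows the card's image-classification tag.
from transformers import pipeline

classifier = pipeline("image-classification", model="ericmconnelly/finetuned-food101")
for prediction in classifier("path/to/food_photo.jpg"):  # any local image or URL
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```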
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #food-ingredient-classification #food101 #food101-finetuned #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Aaron82352/length_generalization_testing
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:10:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
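Nothing in this card pins down the task or architecture, so the only safe illustration is the generic auto-class entry point — treat it as a hypothetical probe, not documented usage:

```python
# Hypothetical probe: inspect the config before assuming a task head,
# since the card specifies neither task nor architecture.
from transformers import AutoConfig, AutoModel

repo_id = "Aaron82352/length_generalization_testing"  # repo id from this record
config = AutoConfig.from_pretrained(repo_id)
print(config.model_type)  # decide on a task-specific auto class from this
model = AutoModel.from_pretrained(repo_id)  # bare backbone, no task head
```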
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_0-seqsight_8192_512_30M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6032
- F1 Score: 0.7332
- Accuracy: 0.7333

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6306 | 3.92 | 200 | 0.5759 | 0.6920 | 0.6926 |
| 0.5587 | 7.84 | 400 | 0.5459 | 0.7346 | 0.7346 |
| 0.524 | 11.76 | 600 | 0.5219 | 0.7490 | 0.7494 |
| 0.4993 | 15.69 | 800 | 0.5213 | 0.7433 | 0.7469 |
| 0.4824 | 19.61 | 1000 | 0.5078 | 0.7651 | 0.7654 |
| 0.4644 | 23.53 | 1200 | 0.5297 | 0.7406 | 0.7444 |
| 0.4475 | 27.45 | 1400 | 0.5143 | 0.7650 | 0.7654 |
| 0.4334 | 31.37 | 1600 | 0.5257 | 0.7593 | 0.7593 |
| 0.4156 | 35.29 | 1800 | 0.5306 | 0.7616 | 0.7617 |
| 0.4 | 39.22 | 2000 | 0.5502 | 0.7629 | 0.7630 |
| 0.3899 | 43.14 | 2200 | 0.5512 | 0.7752 | 0.7753 |
| 0.3778 | 47.06 | 2400 | 0.5614 | 0.7612 | 0.7617 |
| 0.3596 | 50.98 | 2600 | 0.6174 | 0.7587 | 0.7593 |
| 0.3538 | 54.9 | 2800 | 0.5910 | 0.7521 | 0.7531 |
| 0.3416 | 58.82 | 3000 | 0.6229 | 0.7593 | 0.7593 |
| 0.3294 | 62.75 | 3200 | 0.6087 | 0.7652 | 0.7654 |
| 0.3217 | 66.67 | 3400 | 0.6179 | 0.7664 | 0.7667 |
| 0.3095 | 70.59 | 3600 | 0.6788 | 0.7593 | 0.7593 |
| 0.2974 | 74.51 | 3800 | 0.6854 | 0.7510 | 0.7519 |
| 0.286 | 78.43 | 4000 | 0.6915 | 0.7564 | 0.7568 |
| 0.279 | 82.35 | 4200 | 0.7428 | 0.7630 | 0.7630 |
| 0.2706 | 86.27 | 4400 | 0.7287 | 0.7665 | 0.7667 |
| 0.2634 | 90.2 | 4600 | 0.7211 | 0.7528 | 0.7531 |
| 0.2573 | 94.12 | 4800 | 0.7345 | 0.7628 | 0.7630 |
| 0.2504 | 98.04 | 5000 | 0.7398 | 0.7599 | 0.7605 |
| 0.2383 | 101.96 | 5200 | 0.7890 | 0.7544 | 0.7543 |
| 0.2385 | 105.88 | 5400 | 0.7732 | 0.7482 | 0.7481 |
| 0.2276 | 109.8 | 5600 | 0.8023 | 0.7556 | 0.7556 |
| 0.2271 | 113.73 | 5800 | 0.7904 | 0.7587 | 0.7593 |
| 0.2251 | 117.65 | 6000 | 0.8021 | 0.7555 | 0.7556 |
| 0.2163 | 121.57 | 6200 | 0.8689 | 0.7469 | 0.7469 |
| 0.2135 | 125.49 | 6400 | 0.8869 | 0.7432 | 0.7432 |
| 0.2045 | 129.41 | 6600 | 0.9004 | 0.7445 | 0.7444 |
| 0.2038 | 133.33 | 6800 | 0.8614 | 0.7456 | 0.7457 |
| 0.2045 | 137.25 | 7000 | 0.8644 | 0.7568 | 0.7568 |
| 0.1986 | 141.18 | 7200 | 0.8741 | 0.7568 | 0.7568 |
| 0.1924 | 145.1 | 7400 | 0.8985 | 0.7455 | 0.7457 |
| 0.1941 | 149.02 | 7600 | 0.9052 | 0.7482 | 0.7481 |
| 0.1938 | 152.94 | 7800 | 0.8921 | 0.7467 | 0.7469 |
| 0.1896 | 156.86 | 8000 | 0.9117 | 0.7430 | 0.7432 |
| 0.1822 | 160.78 | 8200 | 0.9299 | 0.7432 | 0.7432 |
| 0.1812 | 164.71 | 8400 | 0.9327 | 0.7531 | 0.7531 |
| 0.1882 | 168.63 | 8600 | 0.9083 | 0.7420 | 0.7420 |
| 0.1805 | 172.55 | 8800 | 0.9239 | 0.7482 | 0.7481 |
| 0.1764 | 176.47 | 9000 | 0.9368 | 0.7494 | 0.7494 |
| 0.1778 | 180.39 | 9200 | 0.9469 | 0.7519 | 0.7519 |
| 0.173 | 184.31 | 9400 | 0.9455 | 0.7457 | 0.7457 |
| 0.174 | 188.24 | 9600 | 0.9456 | 0.7470 | 0.7469 |
| 0.1723 | 192.16 | 9800 | 0.9487 | 0.7482 | 0.7481 |
| 0.1772 | 196.08 | 10000 | 0.9479 | 0.7469 | 0.7469 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
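### Example usage (untested sketch)

The card ships no usage code; the snippet below is a hypothetical sketch of loading this LoRA adapter on top of its base model with PEFT. The binary head (`num_labels=2`) and the toy input sequence are assumptions, not taken from the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

# Load the base genomic LM with a classification head (label count assumed).
base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M", num_labels=2
)
# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L8_f"
)
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_8192_512_30M")

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```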
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_0-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:12:44+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_0-seqsight\_8192\_512\_30M-L8\_f
============================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:

* Loss: 0.6032
* F1 Score: 0.7332
* Accuracy: 0.7333

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_0-seqsight_8192_512_30M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5799
- F1 Score: 0.7317
- Accuracy: 0.7321

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6499 | 3.92 | 200 | 0.6000 | 0.6521 | 0.6556 |
| 0.5979 | 7.84 | 400 | 0.5797 | 0.6883 | 0.6889 |
| 0.5761 | 11.76 | 600 | 0.5590 | 0.7086 | 0.7086 |
| 0.5595 | 15.69 | 800 | 0.5484 | 0.7281 | 0.7284 |
| 0.5452 | 19.61 | 1000 | 0.5436 | 0.7247 | 0.7247 |
| 0.5349 | 23.53 | 1200 | 0.5486 | 0.7225 | 0.7284 |
| 0.5213 | 27.45 | 1400 | 0.5208 | 0.7467 | 0.7469 |
| 0.5139 | 31.37 | 1600 | 0.5157 | 0.7528 | 0.7531 |
| 0.5016 | 35.29 | 1800 | 0.5107 | 0.7578 | 0.7580 |
| 0.4946 | 39.22 | 2000 | 0.5147 | 0.7518 | 0.7519 |
| 0.4891 | 43.14 | 2200 | 0.5051 | 0.7629 | 0.7630 |
| 0.4845 | 47.06 | 2400 | 0.5063 | 0.7593 | 0.7593 |
| 0.4786 | 50.98 | 2600 | 0.5183 | 0.7564 | 0.7568 |
| 0.4707 | 54.9 | 2800 | 0.5015 | 0.7582 | 0.7593 |
| 0.4689 | 58.82 | 3000 | 0.5044 | 0.7640 | 0.7642 |
| 0.4638 | 62.75 | 3200 | 0.4977 | 0.7660 | 0.7667 |
| 0.4597 | 66.67 | 3400 | 0.5005 | 0.7640 | 0.7642 |
| 0.46 | 70.59 | 3600 | 0.5013 | 0.7629 | 0.7630 |
| 0.4543 | 74.51 | 3800 | 0.5016 | 0.7613 | 0.7617 |
| 0.4488 | 78.43 | 4000 | 0.5016 | 0.7595 | 0.7605 |
| 0.4468 | 82.35 | 4200 | 0.5019 | 0.7611 | 0.7617 |
| 0.4416 | 86.27 | 4400 | 0.5146 | 0.7655 | 0.7654 |
| 0.4443 | 90.2 | 4600 | 0.5032 | 0.7619 | 0.7630 |
| 0.4386 | 94.12 | 4800 | 0.5068 | 0.7616 | 0.7617 |
| 0.4377 | 98.04 | 5000 | 0.5030 | 0.7658 | 0.7667 |
| 0.4332 | 101.96 | 5200 | 0.5148 | 0.7667 | 0.7667 |
| 0.429 | 105.88 | 5400 | 0.5096 | 0.7603 | 0.7605 |
| 0.43 | 109.8 | 5600 | 0.5135 | 0.7618 | 0.7617 |
| 0.4269 | 113.73 | 5800 | 0.5132 | 0.7639 | 0.7642 |
| 0.4278 | 117.65 | 6000 | 0.5193 | 0.7581 | 0.7580 |
| 0.4235 | 121.57 | 6200 | 0.5165 | 0.7677 | 0.7679 |
| 0.4246 | 125.49 | 6400 | 0.5134 | 0.7676 | 0.7679 |
| 0.4193 | 129.41 | 6600 | 0.5175 | 0.7605 | 0.7605 |
| 0.4188 | 133.33 | 6800 | 0.5150 | 0.7665 | 0.7667 |
| 0.4207 | 137.25 | 7000 | 0.5140 | 0.7700 | 0.7704 |
| 0.417 | 141.18 | 7200 | 0.5174 | 0.7713 | 0.7716 |
| 0.4105 | 145.1 | 7400 | 0.5207 | 0.7664 | 0.7667 |
| 0.4136 | 149.02 | 7600 | 0.5199 | 0.7653 | 0.7654 |
| 0.416 | 152.94 | 7800 | 0.5139 | 0.7724 | 0.7728 |
| 0.4132 | 156.86 | 8000 | 0.5164 | 0.7686 | 0.7691 |
| 0.4086 | 160.78 | 8200 | 0.5218 | 0.7701 | 0.7704 |
| 0.4089 | 164.71 | 8400 | 0.5229 | 0.7677 | 0.7679 |
| 0.4116 | 168.63 | 8600 | 0.5170 | 0.7688 | 0.7691 |
| 0.4085 | 172.55 | 8800 | 0.5201 | 0.7724 | 0.7728 |
| 0.4071 | 176.47 | 9000 | 0.5198 | 0.7713 | 0.7716 |
| 0.4071 | 180.39 | 9200 | 0.5193 | 0.7712 | 0.7716 |
| 0.4024 | 184.31 | 9400 | 0.5221 | 0.7726 | 0.7728 |
| 0.4033 | 188.24 | 9600 | 0.5230 | 0.7726 | 0.7728 |
| 0.4081 | 192.16 | 9800 | 0.5206 | 0.7726 | 0.7728 |
| 0.4032 | 196.08 | 10000 | 0.5208 | 0.7738 | 0.7741 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
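### Reproducing the optimizer setup (illustrative)

For readers who want to mirror the hyperparameters above, this is a minimal sketch of the corresponding `TrainingArguments`; the model, datasets, and metric function are placeholders, and the 200-step evaluation cadence is inferred from the results table rather than stated on the card.

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="GUE_mouse_0-seqsight_8192_512_30M-L1_f",
    learning_rate=5e-4,                # learning_rate: 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,                  # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,                    # matches the evaluation interval above
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                   compute_metrics=...)  # placeholders
```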
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_0-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:12:44+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_0-seqsight\_8192\_512\_30M-L1\_f
============================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:

* Loss: 0.5799
* F1 Score: 0.7317
* Accuracy: 0.7321

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Microllama-300.500kmerge

Microllama-300.500kmerge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Corianas/Microllama_Char_500k_step](https://huggingface.co/Corianas/Microllama_Char_500k_step)
* [Corianas/Microllama_Char_300k_step](https://huggingface.co/Corianas/Microllama_Char_300k_step)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: Corianas/Microllama_Char_500k_step
        layer_range: [0, 12]
      - model: Corianas/Microllama_Char_300k_step
        layer_range: [0, 12]
merge_method: slerp
base_model: Corianas/Microllama_Char_300k_step
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
# pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Corianas/Microllama-300.500kmerge"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(
    prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
print(outputs[0]["generated_text"])
```
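For intuition about the `slerp` merge method configured above, here is a schematic sketch (not mergekit's actual implementation) of spherical linear interpolation between two weight tensors; mergekit additionally applies the per-filter `t` schedules from the YAML rather than a single scalar.

```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors with factor t."""
    v0 = w0.flatten() / (np.linalg.norm(w0) + eps)   # unit direction of w0
    v1 = w1.flatten() / (np.linalg.norm(w1) + eps)   # unit direction of w1
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    theta = np.arccos(dot)                           # angle between directions
    if theta < 1e-4:                                 # nearly colinear: plain lerp
        return (1 - t) * w0 + t * w1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return (s0 * w0.flatten() + s1 * w1.flatten()).reshape(w0.shape)
```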
{"tags": ["merge", "mergekit", "lazymergekit", "Corianas/Microllama_Char_500k_step", "Corianas/Microllama_Char_300k_step"], "base_model": ["Corianas/Microllama_Char_500k_step", "Corianas/Microllama_Char_300k_step"]}
Corianas/Microllama-300.500kmerge
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Corianas/Microllama_Char_500k_step", "Corianas/Microllama_Char_300k_step", "base_model:Corianas/Microllama_Char_500k_step", "base_model:Corianas/Microllama_Char_300k_step", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T05:13:04+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Corianas/Microllama_Char_500k_step #Corianas/Microllama_Char_300k_step #base_model-Corianas/Microllama_Char_500k_step #base_model-Corianas/Microllama_Char_300k_step #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Microllama-300.500kmerge

Microllama-300.500kmerge is a merge of the following models using LazyMergekit:
* Corianas/Microllama_Char_500k_step
* Corianas/Microllama_Char_300k_step

## Configuration

## Usage
[ "# Microllama-300.500kmerge\n\nMicrollama-300.500kmerge is a merge of the following models using LazyMergekit:\n* Corianas/Microllama_Char_500k_step\n* Corianas/Microllama_Char_300k_step", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Corianas/Microllama_Char_500k_step #Corianas/Microllama_Char_300k_step #base_model-Corianas/Microllama_Char_500k_step #base_model-Corianas/Microllama_Char_300k_step #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Microllama-300.500kmerge\n\nMicrollama-300.500kmerge is a merge of the following models using LazyMergekit:\n* Corianas/Microllama_Char_500k_step\n* Corianas/Microllama_Char_300k_step", "## Configuration", "## Usage" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adapterhappy2sad-study-50-0.006
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:13:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2

This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1) on the updated and the original datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
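### Inference sketch (illustrative)

The card includes no usage code; the following is a hedged sketch of chat-style inference with this checkpoint. The prompt and sampling parameters are illustrative choices, not taken from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "In one paragraph, what does DPO training change?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```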
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2", "results": []}]}
ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T05:17:58+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2

This model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1 on the updated and the original datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
[ "# 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
text2text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
himanshubeniwal/mt5-base-finetuned-kk-to-en-filthy-Indian
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T05:18:16+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mt5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mt5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# output-model-directory

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* /workspace/sigrid-llm-lab/layer_locked_raw_sk
* /workspace/sigrid-llm-lab/sigrid-llm-lab/sigrid-llm-lab/layer_locked_inst

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: /workspace/sigrid-llm-lab/layer_locked_raw_sk
        layer_range: [0, 15]
  - sources:
      - model: /workspace/sigrid-llm-lab/sigrid-llm-lab/sigrid-llm-lab/layer_locked_inst
        layer_range: [16, 17]
merge_method: passthrough
dtype: float16
```
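A configuration like this is normally applied with mergekit's command-line entry point; a minimal invocation (file names assumed) might look like:

```sh
pip install mergekit
mergekit-yaml config.yaml ./output-model-directory
```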
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []}
sigridjineth/gemma-2b-var
null
[ "transformers", "safetensors", "gemma", "text-generation", "mergekit", "merge", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T05:19:12+00:00
[]
[]
TAGS #transformers #safetensors #gemma #text-generation #mergekit #merge #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# output-model-directory

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* /workspace/sigrid-llm-lab/layer_locked_raw_sk
* /workspace/sigrid-llm-lab/sigrid-llm-lab/sigrid-llm-lab/layer_locked_inst

### Configuration

The following YAML configuration was used to produce this model:
[ "# output-model-directory\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* /workspace/sigrid-llm-lab/layer_locked_raw_sk\n* /workspace/sigrid-llm-lab/sigrid-llm-lab/sigrid-llm-lab/layer_locked_inst", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #mergekit #merge #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# output-model-directory\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* /workspace/sigrid-llm-lab/layer_locked_raw_sk\n* /workspace/sigrid-llm-lab/sigrid-llm-lab/sigrid-llm-lab/layer_locked_inst", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_1-seqsight_8192_512_30M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2479
- F1 Score: 0.8899
- Accuracy: 0.8900

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5022 | 0.47 | 200 | 0.3628 | 0.8360 | 0.8360 |
| 0.3809 | 0.95 | 400 | 0.3171 | 0.8588 | 0.8589 |
| 0.3403 | 1.42 | 600 | 0.2940 | 0.8722 | 0.8723 |
| 0.3318 | 1.9 | 800 | 0.2831 | 0.8742 | 0.8744 |
| 0.3115 | 2.37 | 1000 | 0.2759 | 0.8757 | 0.8758 |
| 0.3039 | 2.84 | 1200 | 0.2728 | 0.8785 | 0.8787 |
| 0.2895 | 3.32 | 1400 | 0.2651 | 0.8808 | 0.8809 |
| 0.2934 | 3.79 | 1600 | 0.2643 | 0.8829 | 0.8829 |
| 0.2865 | 4.27 | 1800 | 0.2663 | 0.8832 | 0.8835 |
| 0.2807 | 4.74 | 2000 | 0.2628 | 0.8841 | 0.8841 |
| 0.282 | 5.21 | 2200 | 0.2592 | 0.8859 | 0.8861 |
| 0.2762 | 5.69 | 2400 | 0.2551 | 0.8873 | 0.8873 |
| 0.2743 | 6.16 | 2600 | 0.2550 | 0.8881 | 0.8882 |
| 0.2698 | 6.64 | 2800 | 0.2528 | 0.8894 | 0.8894 |
| 0.2758 | 7.11 | 3000 | 0.2541 | 0.8888 | 0.8888 |
| 0.2661 | 7.58 | 3200 | 0.2570 | 0.8879 | 0.8879 |
| 0.2729 | 8.06 | 3400 | 0.2482 | 0.8884 | 0.8885 |
| 0.2621 | 8.53 | 3600 | 0.2524 | 0.8897 | 0.8897 |
| 0.2682 | 9.0 | 3800 | 0.2485 | 0.8909 | 0.8909 |
| 0.2611 | 9.48 | 4000 | 0.2493 | 0.8910 | 0.8912 |
| 0.2657 | 9.95 | 4200 | 0.2482 | 0.8919 | 0.8919 |
| 0.259 | 10.43 | 4400 | 0.2476 | 0.8903 | 0.8903 |
| 0.2589 | 10.9 | 4600 | 0.2496 | 0.8924 | 0.8924 |
| 0.254 | 11.37 | 4800 | 0.2481 | 0.8895 | 0.8895 |
| 0.263 | 11.85 | 5000 | 0.2457 | 0.8916 | 0.8916 |
| 0.2601 | 12.32 | 5200 | 0.2521 | 0.8880 | 0.8881 |
| 0.2584 | 12.8 | 5400 | 0.2491 | 0.8909 | 0.8909 |
| 0.2591 | 13.27 | 5600 | 0.2435 | 0.8895 | 0.8895 |
| 0.252 | 13.74 | 5800 | 0.2433 | 0.8917 | 0.8918 |
| 0.256 | 14.22 | 6000 | 0.2443 | 0.8907 | 0.8907 |
| 0.2522 | 14.69 | 6200 | 0.2450 | 0.8923 | 0.8924 |
| 0.2555 | 15.17 | 6400 | 0.2464 | 0.8885 | 0.8885 |
| 0.2557 | 15.64 | 6600 | 0.2427 | 0.8907 | 0.8907 |
| 0.2506 | 16.11 | 6800 | 0.2408 | 0.8923 | 0.8924 |
| 0.2497 | 16.59 | 7000 | 0.2427 | 0.8922 | 0.8922 |
| 0.2558 | 17.06 | 7200 | 0.2423 | 0.8921 | 0.8921 |
| 0.2495 | 17.54 | 7400 | 0.2455 | 0.8906 | 0.8906 |
| 0.2528 | 18.01 | 7600 | 0.2410 | 0.8919 | 0.8919 |
| 0.25 | 18.48 | 7800 | 0.2424 | 0.8921 | 0.8921 |
| 0.2518 | 18.96 | 8000 | 0.2404 | 0.8929 | 0.8930 |
| 0.2499 | 19.43 | 8200 | 0.2430 | 0.8919 | 0.8919 |
| 0.2512 | 19.91 | 8400 | 0.2399 | 0.8916 | 0.8916 |
| 0.2519 | 20.38 | 8600 | 0.2407 | 0.8924 | 0.8924 |
| 0.2464 | 20.85 | 8800 | 0.2395 | 0.8938 | 0.8938 |
| 0.2462 | 21.33 | 9000 | 0.2405 | 0.8931 | 0.8931 |
| 0.2465 | 21.8 | 9200 | 0.2414 | 0.8934 | 0.8934 |
| 0.2502 | 22.27 | 9400 | 0.2405 | 0.8930 | 0.8930 |
| 0.2446 | 22.75 | 9600 | 0.2399 | 0.8931 | 0.8931 |
| 0.2504 | 23.22 | 9800 | 0.2400 | 0.8926 | 0.8927 |
| 0.2509 | 23.7 | 10000 | 0.2402 | 0.8934 | 0.8934 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
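### Metric computation (illustrative)

The F1 score and accuracy columns above can be produced by a standard `compute_metrics` hook. This sketch assumes scikit-learn and macro averaging for F1, which the card does not state.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)   # predicted class per example
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging assumed
        "accuracy": accuracy_score(labels, preds),
    }
```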
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_1-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:23:02+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_1-seqsight\_8192\_512\_30M-L1\_f
============================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:

* Loss: 0.2479
* F1 Score: 0.8899
* Accuracy: 0.8900

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 1.0391 - F1 Score: 0.7235 - Accuracy: 0.7235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.613 | 3.92 | 200 | 0.5489 | 0.7276 | 0.7284 | | 0.5242 | 7.84 | 400 | 0.5311 | 0.7370 | 0.7370 | | 0.4811 | 11.76 | 600 | 0.5174 | 0.7455 | 0.7457 | | 0.4366 | 15.69 | 800 | 0.5271 | 0.7463 | 0.7494 | | 0.3983 | 19.61 | 1000 | 0.5791 | 0.7438 | 0.7444 | | 0.3539 | 23.53 | 1200 | 0.6226 | 0.7650 | 0.7654 | | 0.3137 | 27.45 | 1400 | 0.6891 | 0.7506 | 0.7506 | | 0.2833 | 31.37 | 1600 | 0.7565 | 0.7388 | 0.7395 | | 0.2452 | 35.29 | 1800 | 0.7811 | 0.7330 | 0.7333 | | 0.2181 | 39.22 | 2000 | 0.9093 | 0.7487 | 0.7494 | | 0.1983 | 43.14 | 2200 | 0.9329 | 0.7527 | 0.7531 | | 0.1789 | 47.06 | 2400 | 0.9086 | 0.7543 | 0.7543 | | 0.1606 | 50.98 | 2600 | 0.9805 | 0.7654 | 0.7654 | | 0.1529 | 54.9 | 2800 | 0.9168 | 0.7615 | 0.7617 | | 0.1377 | 58.82 | 3000 | 1.0383 | 0.7419 | 0.7420 | | 0.1267 | 62.75 | 3200 | 1.0284 | 0.7506 | 0.7506 | | 0.1125 | 66.67 | 3400 | 1.1102 | 0.7479 | 0.7481 | | 0.104 | 70.59 | 3600 | 1.2252 | 0.7442 | 0.7444 | | 0.0937 | 74.51 | 3800 | 1.1755 | 0.7531 | 0.7531 | | 0.094 | 78.43 | 4000 | 1.2074 | 0.7432 | 0.7432 | | 0.0907 | 82.35 | 4200 | 1.2251 | 0.7420 | 0.7420 | | 0.079 | 86.27 | 4400 | 1.2857 | 0.7505 | 0.7506 | | 0.0765 | 90.2 | 4600 | 1.2619 | 0.7531 | 0.7531 | | 0.0733 | 94.12 | 4800 | 1.2980 | 0.7593 | 0.7593 | | 0.0688 | 98.04 | 5000 | 1.3034 | 0.7642 | 0.7642 | | 0.0658 | 101.96 | 5200 | 1.2959 | 0.7567 | 0.7568 | | 0.0614 | 105.88 | 5400 | 1.3782 | 0.7502 | 0.7506 | | 0.0607 | 109.8 | 5600 | 1.3433 | 0.7481 | 0.7481 | | 0.0589 | 113.73 | 5800 | 1.3985 | 0.7555 | 0.7556 | | 0.0547 | 117.65 | 6000 | 1.3775 | 0.7567 | 0.7568 | | 0.0517 | 121.57 | 6200 | 1.4986 | 0.7481 | 0.7481 | | 0.0518 | 125.49 | 6400 | 1.5264 | 0.7491 | 0.7494 | | 0.0487 | 129.41 | 6600 | 1.4869 | 0.7493 | 0.7494 | | 0.0467 | 133.33 | 6800 | 1.4509 | 0.7519 | 0.7519 | | 0.0477 | 137.25 | 7000 | 1.4770 | 0.7494 | 0.7494 | | 0.0465 | 141.18 | 7200 | 1.4356 | 0.7543 | 0.7543 | | 0.0409 | 145.1 | 7400 | 1.5309 | 0.7493 | 0.7494 | | 0.0415 | 149.02 | 7600 | 1.5781 | 0.7542 | 0.7543 | | 0.0373 | 152.94 | 7800 | 1.6046 | 0.7531 | 0.7531 | | 0.0396 | 156.86 | 8000 | 1.6092 | 0.7506 | 0.7506 | | 0.0375 | 160.78 | 8200 | 1.6032 | 0.7531 | 0.7531 | | 0.0354 | 164.71 | 8400 | 1.5828 | 0.7618 | 0.7617 | | 0.0372 | 168.63 | 8600 | 1.6199 | 0.7467 | 0.7469 | | 0.0338 | 172.55 | 8800 | 1.6226 | 0.7518 | 0.7519 
| | 0.0348 | 176.47 | 9000 | 1.6164 | 0.7603 | 0.7605 | | 0.033 | 180.39 | 9200 | 1.5916 | 0.7518 | 0.7519 | | 0.0348 | 184.31 | 9400 | 1.5746 | 0.7555 | 0.7556 | | 0.0342 | 188.24 | 9600 | 1.5826 | 0.7543 | 0.7543 | | 0.0323 | 192.16 | 9800 | 1.5919 | 0.7506 | 0.7506 | | 0.03 | 196.08 | 10000 | 1.5983 | 0.7506 | 0.7506 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
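The card above leaves "How to get started" open. A minimal, hedged sketch of loading this PEFT adapter on top of its base model follows; the sequence-classification head, `num_labels=2`, and the toy DNA input are assumptions not confirmed by the card, and the base model may require `trust_remote_code=True` depending on how it is packaged.

```python
# Hedged sketch: repo ids are taken from the card above; the task head and
# label count are assumptions, since the card does not document them.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M", num_labels=2
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L32_f"
)
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_8192_512_30M")

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits)
```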
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_0-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:23:02+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_0-seqsight\_8192\_512\_30M-L32\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset. It achieves the following results on the evaluation set: * Loss: 1.0391 * F1 Score: 0.7235 * Accuracy: 0.7235 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# Uploaded model - **Developed by:** Mohamedshaaban2001 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
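A hedged inference sketch for the checkpoint above; the card only documents Unsloth training, so loading it with plain `transformers` is an assumption, and the 4-bit base may additionally require `bitsandbytes`.

```python
# Hedged sketch: only the repo id comes from the card; plain transformers
# loading is assumed and may need bitsandbytes for the 4-bit base weights.
from transformers import pipeline

pipe = pipeline("text-generation", model="Mohamedshaaban2001/llama3_4")
print(pipe("Explain LoRA fine-tuning in one sentence:", max_new_tokens=64)[0]["generated_text"])
```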
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Mohamedshaaban2001/llama3_4
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:23:29+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: Mohamedshaaban2001 - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Mohamedshaaban2001\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: Mohamedshaaban2001\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-filtered-50-0.009
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:24:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eli5_dir This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 3.5573 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6991 | 1.0 | 1314 | 3.5643 | | 3.5819 | 2.0 | 2628 | 3.5568 | | 3.5421 | 3.0 | 3942 | 3.5573 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
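Since the card stops at training details, a hedged generation sketch may help; only the repo id is taken from the card, and the prompt is illustrative.

```python
# Hedged sketch: sampling from the eli5_dir checkpoint named in the card above.
from transformers import pipeline

generator = pipeline("text-generation", model="BohanJiang/eli5_dir")
print(generator("Why is the sky blue?", max_new_tokens=60, do_sample=True)[0]["generated_text"])
```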
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "eli5_dir", "results": []}]}
BohanJiang/eli5_dir
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:eli5_category", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T05:25:49+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
eli5\_dir ========= This model is a fine-tuned version of gpt2 on the eli5\_category dataset. It achieves the following results on the evaluation set: * Loss: 3.5573 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2265 - Accuracy: 0.9387 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2122 | 1.0 | 1563 | 0.2055 | 0.9221 | | 0.1262 | 2.0 | 3126 | 0.2265 | 0.9387 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
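As a hedged usage sketch: the card does not name the dataset or the label mapping, so the meaning of the returned labels is unknown; only the repo id is taken from the card.

```python
# Hedged sketch: the label scheme is whatever the repo's config defines.
from transformers import pipeline

classifier = pipeline("text-classification", model="WillXH/my_awesome_model")
print(classifier("This was an absolute masterpiece."))
```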
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "my_awesome_model", "results": []}]}
WillXH/my_awesome_model
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:26:52+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
my\_awesome\_model ================== This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2265 * Accuracy: 0.9387 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/1plso1l
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T05:28:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
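The tags above (llama, text-generation, conversational) suggest a causal chat model; the sketch below is hedged accordingly, with everything beyond the repo id assumed, including the manually formatted prompt.

```python
# Hedged sketch based only on the tags above; the prompt format is an
# assumption, since the card documents no chat template.
from transformers import pipeline

chat = pipeline("text-generation", model="shallow6414/1plso1l")
print(chat("User: Say hello in one short sentence.\nAssistant:", max_new_tokens=32)[0]["generated_text"])
```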
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-audio
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
procit001/female_english_voice_v1.4
null
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:28:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #vits #text-to-audio #arxiv-1910.09700 #endpoints_compatible #region-us
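Given the vits and text-to-audio tags above, a hedged synthesis sketch; pipeline support for this checkpoint and the output layout are assumptions.

```python
# Hedged sketch: only the repo id comes from this record's tags and id fields.
import numpy as np
import scipy.io.wavfile
from transformers import pipeline

tts = pipeline("text-to-speech", model="procit001/female_english_voice_v1.4")
out = tts("Hello from a VITS checkpoint.")
audio = np.squeeze(out["audio"])  # handle (n,) or (1, n) shapes
scipy.io.wavfile.write("speech.wav", rate=out["sampling_rate"], data=audio)
```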
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #vits #text-to-audio #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: hossniper/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
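To complement the resume command above, a hedged sketch of pulling the trained checkpoint from the Hub with standard `huggingface_hub` tooling; only the repo id comes from the card.

```python
# Hedged sketch: download the SnowballTarget files locally so they can be
# resumed with mlagents-learn or inspected by hand.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="hossniper/ppo-SnowballTarget")
print("Checkpoint files downloaded to:", local_dir)
```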
{"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]}
hossniper/ppo-SnowballTarget
null
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
null
2024-04-27T05:29:39+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us
# ppo Agent playing SnowballTarget This is a trained model of a ppo agent playing SnowballTarget using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: hossniper/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: hossniper/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us \n", "# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: hossniper/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
null
null
What is Dozerex Tablet? Dozerex is a premium-quality men's health capsule formulated to support fitness and energy levels. Its advanced formula combines a synergistic blend of vitamins, minerals, and herbal extracts, specifically selected to promote optimal health and well-being in men. Official website:<a href="https://www.nutritionsee.com/dozermlaysi">www.Dozerex.com</a> <p><a href="https://www.nutritionsee.com/dozermlaysi"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Dozerex-Malaysia-1.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/dozermlaysi">Buy now!! Click the link below for more information and get a 50% discount now... Hurry</a> Official website:<a href="https://www.nutritionsee.com/dozermlaysi">www.Dozerex.com</a>
{"license": "apache-2.0"}
DozerexMalaysia/Dozerex
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-27T05:31:26+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
What is Dozerex Tablet? Dozerex is a premium-quality men's health capsule formulated to support fitness and energy levels. Its advanced formula combines a synergistic blend of vitamins, minerals, and herbal extracts, specifically selected to promote optimal health and well-being in men. Official website:<a href="URL <p><a href="URL <img src="URL alt="enter image description here"> </a></p> <a href="URL now!! Click the link below for more information and get a 50% discount now... Hurry</a> Official website:<a href="URL
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # pavanch121/distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1160 - Validation Loss: 0.3811 - Train Precision: 0.5648 - Train Recall: 0.3291 - Train F1: 0.4159 - Train Accuracy: 0.9237 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.3179 | 0.4210 | 0.4599 | 0.1002 | 0.1645 | 0.9054 | 0 | | 0.1493 | 0.3804 | 0.5184 | 0.3029 | 0.3823 | 0.9203 | 1 | | 0.1160 | 0.3811 | 0.5648 | 0.3291 | 0.4159 | 0.9237 | 2 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
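A hedged inference sketch for the TF checkpoint above; the entity label scheme is not documented in the card, so the output tags are whatever the repo's config defines.

```python
# Hedged sketch: framework="tf" matches the card's Keras/TensorFlow training.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pavanch121/distilbert-base-uncased-finetuned-ner",
    framework="tf",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```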
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "pavanch121/distilbert-base-uncased-finetuned-ner", "results": []}]}
pavanch121/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "tf", "tensorboard", "distilbert", "token-classification", "generated_from_keras_callback", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:31:26+00:00
[]
[]
TAGS #transformers #tf #tensorboard #distilbert #token-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
pavanch121/distilbert-base-uncased-finetuned-ner ================================================ This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.1160 * Validation Loss: 0.3811 * Train Precision: 0.5648 * Train Recall: 0.3291 * Train F1: 0.4159 * Train Accuracy: 0.9237 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 636, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.40.0 * TensorFlow 2.15.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 636, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tf #tensorboard #distilbert #token-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 636, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - AdityaNath/Jap_Arch_LoRA <Gallery /> ## Model description These are AdityaNath/Jap_Arch_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Jap_Arch Architecture to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](AdityaNath/Jap_Arch_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a hedged sketch follows this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
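A hedged sketch for the card's open "How to use" slot, using the standard diffusers SDXL plus LoRA pattern with the trigger phrase the card specifies; fp16, CUDA, and the extra prompt text are assumptions.

```python
# Hedged sketch: base model, LoRA repo, and trigger phrase come from the card;
# precision and device choices are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("AdityaNath/Jap_Arch_LoRA")
image = pipe("a photo of Jap_Arch Architecture, wooden temple at dusk").images[0]
image.save("jap_arch_sample.png")
```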
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of Jap_Arch Architecture", "widget": []}
AdityaNath/Jap_Arch_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-27T05:31:54+00:00
[]
[]
TAGS #diffusers #tensorboard #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - AdityaNath/Jap_Arch_LoRA <Gallery /> ## Model description These are AdityaNath/Jap_Arch_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Jap_Arch Architecture to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - AdityaNath/Jap_Arch_LoRA\n\n<Gallery />", "## Model description\n\nThese are AdityaNath/Jap_Arch_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of Jap_Arch Architecture to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - AdityaNath/Jap_Arch_LoRA\n\n<Gallery />", "## Model description\n\nThese are AdityaNath/Jap_Arch_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of Jap_Arch Architecture to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2336 - F1 Score: 0.8980 - Accuracy: 0.8980 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.446 | 0.47 | 200 | 0.3149 | 0.8595 | 0.8595 | | 0.3297 | 0.95 | 400 | 0.2748 | 0.8794 | 0.8795 | | 0.2983 | 1.42 | 600 | 0.2600 | 0.8850 | 0.8851 | | 0.2929 | 1.9 | 800 | 0.2536 | 0.8886 | 0.8887 | | 0.2758 | 2.37 | 1000 | 0.2500 | 0.8900 | 0.8900 | | 0.2654 | 2.84 | 1200 | 0.2445 | 0.8919 | 0.8921 | | 0.2546 | 3.32 | 1400 | 0.2403 | 0.8944 | 0.8944 | | 0.2594 | 3.79 | 1600 | 0.2435 | 0.8955 | 0.8955 | | 0.2512 | 4.27 | 1800 | 0.2406 | 0.8974 | 0.8976 | | 0.2462 | 4.74 | 2000 | 0.2467 | 0.8947 | 0.8947 | | 0.2493 | 5.21 | 2200 | 0.2385 | 0.8968 | 0.8970 | | 0.2445 | 5.69 | 2400 | 0.2371 | 0.8984 | 0.8984 | | 0.2405 | 6.16 | 2600 | 0.2362 | 0.8963 | 0.8965 | | 0.239 | 6.64 | 2800 | 0.2367 | 0.8971 | 0.8971 | | 0.2406 | 7.11 | 3000 | 0.2345 | 0.8986 | 0.8986 | | 0.2331 | 7.58 | 3200 | 0.2425 | 0.8961 | 0.8961 | | 0.2403 | 8.06 | 3400 | 0.2270 | 0.9018 | 0.9019 | | 0.2318 | 8.53 | 3600 | 0.2334 | 0.9011 | 0.9011 | | 0.2378 | 9.0 | 3800 | 0.2284 | 0.9021 | 0.9021 | | 0.2284 | 9.48 | 4000 | 0.2290 | 0.9033 | 0.9033 | | 0.2333 | 9.95 | 4200 | 0.2279 | 0.9026 | 0.9026 | | 0.2276 | 10.43 | 4400 | 0.2298 | 0.9020 | 0.9020 | | 0.2266 | 10.9 | 4600 | 0.2311 | 0.9011 | 0.9011 | | 0.2218 | 11.37 | 4800 | 0.2346 | 0.8990 | 0.8990 | | 0.2308 | 11.85 | 5000 | 0.2291 | 0.9022 | 0.9023 | | 0.2272 | 12.32 | 5200 | 0.2355 | 0.8962 | 0.8962 | | 0.2263 | 12.8 | 5400 | 0.2331 | 0.9004 | 0.9004 | | 0.2254 | 13.27 | 5600 | 0.2235 | 0.9026 | 0.9026 | | 0.2199 | 13.74 | 5800 | 0.2265 | 0.9045 | 0.9045 | | 0.2236 | 14.22 | 6000 | 0.2323 | 0.9010 | 0.9010 | | 0.219 | 14.69 | 6200 | 0.2272 | 0.9063 | 0.9063 | | 0.2209 | 15.17 | 6400 | 0.2320 | 0.9010 | 0.9010 | | 0.2213 | 15.64 | 6600 | 0.2243 | 0.9044 | 0.9044 | | 0.2155 | 16.11 | 6800 | 0.2260 | 0.9049 | 0.9050 | | 0.2151 | 16.59 | 7000 | 0.2341 | 0.9007 | 0.9007 | | 0.2209 | 17.06 | 7200 | 0.2245 | 0.9030 | 0.9030 | | 0.2149 | 17.54 | 7400 | 0.2291 | 0.9020 | 0.9020 | | 0.2171 | 18.01 | 7600 | 0.2228 | 0.9056 | 0.9056 | | 0.2146 | 18.48 | 7800 | 0.2288 | 0.9033 | 0.9033 | | 0.2202 | 18.96 | 8000 | 0.2217 | 0.9067 | 0.9067 | | 0.2125 | 19.43 | 8200 | 0.2289 | 0.9030 | 0.9030 | | 0.2152 | 19.91 | 8400 | 0.2247 | 0.9058 | 0.9059 | | 0.2161 | 20.38 | 8600 | 0.2269 | 0.9029 | 0.9029 | | 0.2133 | 20.85 | 8800 | 0.2236 | 0.9054 | 0.9054 | | 0.2105 | 21.33 | 9000 | 0.2246 | 
0.9044 | 0.9044 | | 0.2108 | 21.8 | 9200 | 0.2271 | 0.9038 | 0.9038 | | 0.2137 | 22.27 | 9400 | 0.2250 | 0.9045 | 0.9045 | | 0.2097 | 22.75 | 9600 | 0.2235 | 0.9053 | 0.9053 | | 0.2136 | 23.22 | 9800 | 0.2240 | 0.9045 | 0.9045 | | 0.2164 | 23.7 | 10000 | 0.2241 | 0.9050 | 0.9050 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_1-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:33:25+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_1-seqsight\_8192\_512\_30M-L8\_f ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.2336 * F1 Score: 0.8980 * Accuracy: 0.8980 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.2331 - F1 Score: 0.9027 - Accuracy: 0.9027 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4165 | 0.47 | 200 | 0.2963 | 0.8706 | 0.8706 | | 0.3038 | 0.95 | 400 | 0.2607 | 0.8855 | 0.8855 | | 0.2779 | 1.42 | 600 | 0.2447 | 0.8928 | 0.8928 | | 0.2724 | 1.9 | 800 | 0.2473 | 0.8930 | 0.8930 | | 0.2578 | 2.37 | 1000 | 0.2450 | 0.8952 | 0.8952 | | 0.2486 | 2.84 | 1200 | 0.2324 | 0.8978 | 0.8979 | | 0.2404 | 3.32 | 1400 | 0.2364 | 0.9021 | 0.9021 | | 0.2443 | 3.79 | 1600 | 0.2320 | 0.9008 | 0.9008 | | 0.2377 | 4.27 | 1800 | 0.2301 | 0.9030 | 0.9030 | | 0.2336 | 4.74 | 2000 | 0.2416 | 0.8990 | 0.8990 | | 0.2348 | 5.21 | 2200 | 0.2311 | 0.9018 | 0.9020 | | 0.2306 | 5.69 | 2400 | 0.2322 | 0.9009 | 0.9010 | | 0.2269 | 6.16 | 2600 | 0.2250 | 0.9038 | 0.9039 | | 0.2256 | 6.64 | 2800 | 0.2328 | 0.9006 | 0.9007 | | 0.2236 | 7.11 | 3000 | 0.2297 | 0.8999 | 0.8999 | | 0.2151 | 7.58 | 3200 | 0.2326 | 0.9017 | 0.9017 | | 0.2253 | 8.06 | 3400 | 0.2190 | 0.9035 | 0.9035 | | 0.213 | 8.53 | 3600 | 0.2303 | 0.9039 | 0.9039 | | 0.2205 | 9.0 | 3800 | 0.2221 | 0.9070 | 0.9070 | | 0.2111 | 9.48 | 4000 | 0.2212 | 0.9048 | 0.9048 | | 0.2136 | 9.95 | 4200 | 0.2193 | 0.9064 | 0.9064 | | 0.2083 | 10.43 | 4400 | 0.2244 | 0.9054 | 0.9054 | | 0.208 | 10.9 | 4600 | 0.2238 | 0.9047 | 0.9047 | | 0.2019 | 11.37 | 4800 | 0.2229 | 0.9069 | 0.9069 | | 0.2094 | 11.85 | 5000 | 0.2241 | 0.9063 | 0.9063 | | 0.2044 | 12.32 | 5200 | 0.2303 | 0.9014 | 0.9014 | | 0.2034 | 12.8 | 5400 | 0.2306 | 0.9070 | 0.9070 | | 0.2007 | 13.27 | 5600 | 0.2203 | 0.9079 | 0.9079 | | 0.1984 | 13.74 | 5800 | 0.2237 | 0.9069 | 0.9069 | | 0.2013 | 14.22 | 6000 | 0.2351 | 0.9013 | 0.9013 | | 0.1946 | 14.69 | 6200 | 0.2232 | 0.9085 | 0.9085 | | 0.1978 | 15.17 | 6400 | 0.2263 | 0.9057 | 0.9057 | | 0.1959 | 15.64 | 6600 | 0.2242 | 0.9064 | 0.9064 | | 0.1917 | 16.11 | 6800 | 0.2255 | 0.9061 | 0.9062 | | 0.1874 | 16.59 | 7000 | 0.2316 | 0.9045 | 0.9045 | | 0.1962 | 17.06 | 7200 | 0.2231 | 0.9076 | 0.9076 | | 0.1867 | 17.54 | 7400 | 0.2283 | 0.9063 | 0.9063 | | 0.1898 | 18.01 | 7600 | 0.2215 | 0.9079 | 0.9079 | | 0.1861 | 18.48 | 7800 | 0.2292 | 0.9039 | 0.9039 | | 0.1913 | 18.96 | 8000 | 0.2219 | 0.9082 | 0.9082 | | 0.1844 | 19.43 | 8200 | 0.2305 | 0.9042 | 0.9042 | | 0.1883 | 19.91 | 8400 | 0.2268 | 0.9073 | 0.9073 | | 0.1852 | 20.38 | 8600 | 0.2343 | 0.9038 | 0.9038 | | 0.1831 | 20.85 | 8800 | 0.2269 | 0.9079 | 0.9079 | | 0.1816 | 21.33 | 9000 | 0.2298 | 
0.9036 | 0.9036 | | 0.1808 | 21.8 | 9200 | 0.2305 | 0.9030 | 0.9030 | | 0.1833 | 22.27 | 9400 | 0.2284 | 0.9045 | 0.9045 | | 0.1769 | 22.75 | 9600 | 0.2287 | 0.9070 | 0.9070 | | 0.1815 | 23.22 | 9800 | 0.2289 | 0.9064 | 0.9064 | | 0.1843 | 23.7 | 10000 | 0.2288 | 0.9048 | 0.9048 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_1-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:33:49+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_1-seqsight\_8192\_512\_30M-L32\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.2331 * F1 Score: 0.9027 * Accuracy: 0.9027 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.5717 - F1 Score: 0.7021 - Accuracy: 0.7021 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6553 | 1.69 | 200 | 0.6203 | 0.6413 | 0.6421 | | 0.6235 | 3.39 | 400 | 0.6047 | 0.6544 | 0.6543 | | 0.6066 | 5.08 | 600 | 0.5917 | 0.6692 | 0.6691 | | 0.5969 | 6.78 | 800 | 0.5804 | 0.6836 | 0.6835 | | 0.5883 | 8.47 | 1000 | 0.5717 | 0.6856 | 0.6856 | | 0.5805 | 10.17 | 1200 | 0.5665 | 0.6989 | 0.6989 | | 0.5744 | 11.86 | 1400 | 0.5588 | 0.7068 | 0.7074 | | 0.5687 | 13.56 | 1600 | 0.5531 | 0.7111 | 0.7111 | | 0.5621 | 15.25 | 1800 | 0.5536 | 0.7169 | 0.7175 | | 0.5579 | 16.95 | 2000 | 0.5514 | 0.7116 | 0.7122 | | 0.555 | 18.64 | 2200 | 0.5498 | 0.7143 | 0.7148 | | 0.554 | 20.34 | 2400 | 0.5472 | 0.7173 | 0.7175 | | 0.5522 | 22.03 | 2600 | 0.5602 | 0.7036 | 0.7063 | | 0.5492 | 23.73 | 2800 | 0.5442 | 0.7234 | 0.7233 | | 0.5455 | 25.42 | 3000 | 0.5447 | 0.7194 | 0.7196 | | 0.5446 | 27.12 | 3200 | 0.5541 | 0.7038 | 0.7063 | | 0.5418 | 28.81 | 3400 | 0.5449 | 0.7240 | 0.7244 | | 0.5385 | 30.51 | 3600 | 0.5404 | 0.7277 | 0.7276 | | 0.5376 | 32.2 | 3800 | 0.5398 | 0.7313 | 0.7313 | | 0.538 | 33.9 | 4000 | 0.5468 | 0.7242 | 0.7249 | | 0.5312 | 35.59 | 4200 | 0.5471 | 0.7261 | 0.7265 | | 0.5362 | 37.29 | 4400 | 0.5402 | 0.7313 | 0.7313 | | 0.5308 | 38.98 | 4600 | 0.5377 | 0.7287 | 0.7286 | | 0.5299 | 40.68 | 4800 | 0.5457 | 0.7234 | 0.7244 | | 0.5245 | 42.37 | 5000 | 0.5421 | 0.7348 | 0.7350 | | 0.5284 | 44.07 | 5200 | 0.5382 | 0.7398 | 0.7398 | | 0.5243 | 45.76 | 5400 | 0.5384 | 0.7342 | 0.7345 | | 0.5236 | 47.46 | 5600 | 0.5374 | 0.7393 | 0.7392 | | 0.5267 | 49.15 | 5800 | 0.5378 | 0.7351 | 0.7355 | | 0.5217 | 50.85 | 6000 | 0.5371 | 0.7332 | 0.7334 | | 0.5249 | 52.54 | 6200 | 0.5338 | 0.7382 | 0.7382 | | 0.5209 | 54.24 | 6400 | 0.5371 | 0.7327 | 0.7329 | | 0.5222 | 55.93 | 6600 | 0.5350 | 0.7387 | 0.7387 | | 0.5191 | 57.63 | 6800 | 0.5358 | 0.7388 | 0.7387 | | 0.519 | 59.32 | 7000 | 0.5411 | 0.7307 | 0.7313 | | 0.5174 | 61.02 | 7200 | 0.5345 | 0.7409 | 0.7408 | | 0.5175 | 62.71 | 7400 | 0.5361 | 0.7382 | 0.7382 | | 0.5162 | 64.41 | 7600 | 0.5360 | 0.7327 | 0.7329 | | 0.5175 | 66.1 | 7800 | 0.5352 | 0.7317 | 0.7318 | | 0.5172 | 67.8 | 8000 | 0.5342 | 0.7350 | 0.7350 | | 0.5136 | 69.49 | 8200 | 0.5342 | 0.7340 | 0.7339 | | 0.5157 | 71.19 | 8400 | 0.5347 | 0.7349 | 0.7350 | | 0.5145 | 72.88 | 8600 | 0.5341 | 0.7388 | 0.7387 | | 0.5138 | 74.58 | 8800 | 0.5362 | 0.7348 | 0.7350 | | 0.5118 | 76.27 | 
9000 | 0.5353 | 0.7360 | 0.7361 | | 0.5148 | 77.97 | 9200 | 0.5372 | 0.7316 | 0.7318 | | 0.5127 | 79.66 | 9400 | 0.5351 | 0.7361 | 0.7361 | | 0.5109 | 81.36 | 9600 | 0.5358 | 0.7338 | 0.7339 | | 0.5141 | 83.05 | 9800 | 0.5353 | 0.7355 | 0.7355 | | 0.51 | 84.75 | 10000 | 0.5356 | 0.7338 | 0.7339 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_4-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:34:15+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_4-seqsight\_8192\_512\_30M-L1\_f ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.5717 * F1 Score: 0.7021 * Accuracy: 0.7021 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.6126 - F1 Score: 0.6998 - Accuracy: 0.6999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6406 | 1.69 | 200 | 0.6052 | 0.6636 | 0.6654 | | 0.6016 | 3.39 | 400 | 0.5785 | 0.6849 | 0.6856 | | 0.5745 | 5.08 | 600 | 0.5611 | 0.7072 | 0.7095 | | 0.5629 | 6.78 | 800 | 0.5499 | 0.7169 | 0.7175 | | 0.5536 | 8.47 | 1000 | 0.5510 | 0.7174 | 0.7185 | | 0.5444 | 10.17 | 1200 | 0.5478 | 0.7220 | 0.7228 | | 0.5396 | 11.86 | 1400 | 0.5411 | 0.7296 | 0.7297 | | 0.5321 | 13.56 | 1600 | 0.5419 | 0.7308 | 0.7313 | | 0.5251 | 15.25 | 1800 | 0.5469 | 0.7247 | 0.7254 | | 0.5194 | 16.95 | 2000 | 0.5464 | 0.7303 | 0.7318 | | 0.5131 | 18.64 | 2200 | 0.5617 | 0.7197 | 0.7233 | | 0.5114 | 20.34 | 2400 | 0.5442 | 0.7282 | 0.7281 | | 0.5074 | 22.03 | 2600 | 0.5555 | 0.7256 | 0.7265 | | 0.4998 | 23.73 | 2800 | 0.5419 | 0.7308 | 0.7307 | | 0.4942 | 25.42 | 3000 | 0.5530 | 0.7242 | 0.7254 | | 0.4927 | 27.12 | 3200 | 0.5530 | 0.7265 | 0.7270 | | 0.4861 | 28.81 | 3400 | 0.5565 | 0.7246 | 0.7249 | | 0.481 | 30.51 | 3600 | 0.5561 | 0.7266 | 0.7265 | | 0.479 | 32.2 | 3800 | 0.5578 | 0.7290 | 0.7292 | | 0.4805 | 33.9 | 4000 | 0.5657 | 0.7225 | 0.7228 | | 0.4664 | 35.59 | 4200 | 0.5717 | 0.7165 | 0.7175 | | 0.4697 | 37.29 | 4400 | 0.5633 | 0.7248 | 0.7249 | | 0.4618 | 38.98 | 4600 | 0.5758 | 0.7346 | 0.7350 | | 0.4588 | 40.68 | 4800 | 0.5711 | 0.7144 | 0.7153 | | 0.4515 | 42.37 | 5000 | 0.5816 | 0.7250 | 0.7249 | | 0.4543 | 44.07 | 5200 | 0.5856 | 0.7201 | 0.7201 | | 0.4511 | 45.76 | 5400 | 0.5703 | 0.7215 | 0.7217 | | 0.4462 | 47.46 | 5600 | 0.5776 | 0.7287 | 0.7286 | | 0.4482 | 49.15 | 5800 | 0.5725 | 0.7174 | 0.7180 | | 0.4399 | 50.85 | 6000 | 0.5715 | 0.7314 | 0.7313 | | 0.4409 | 52.54 | 6200 | 0.5766 | 0.7381 | 0.7382 | | 0.4337 | 54.24 | 6400 | 0.5738 | 0.7198 | 0.7201 | | 0.4332 | 55.93 | 6600 | 0.5786 | 0.7249 | 0.7249 | | 0.4295 | 57.63 | 6800 | 0.5863 | 0.7271 | 0.7270 | | 0.4284 | 59.32 | 7000 | 0.5902 | 0.7162 | 0.7164 | | 0.4261 | 61.02 | 7200 | 0.5840 | 0.7228 | 0.7228 | | 0.4232 | 62.71 | 7400 | 0.5878 | 0.7345 | 0.7345 | | 0.4201 | 64.41 | 7600 | 0.5917 | 0.7266 | 0.7265 | | 0.4209 | 66.1 | 7800 | 0.5925 | 0.7254 | 0.7254 | | 0.4204 | 67.8 | 8000 | 0.5818 | 0.7282 | 0.7281 | | 0.414 | 69.49 | 8200 | 0.5877 | 0.7298 | 0.7297 | | 0.4171 | 71.19 | 8400 | 0.5855 | 0.7335 | 0.7334 | | 0.4147 | 72.88 | 8600 | 0.5864 | 0.7330 | 0.7329 | | 0.4123 | 74.58 | 8800 | 0.5875 | 0.7260 | 0.7260 | | 0.4137 | 76.27 | 
9000 | 0.5882 | 0.7302 | 0.7302 | | 0.4089 | 77.97 | 9200 | 0.5970 | 0.7270 | 0.7270 | | 0.4101 | 79.66 | 9400 | 0.5938 | 0.7282 | 0.7281 | | 0.4052 | 81.36 | 9600 | 0.5939 | 0.7270 | 0.7270 | | 0.4093 | 83.05 | 9800 | 0.5921 | 0.7265 | 0.7265 | | 0.4066 | 84.75 | 10000 | 0.5929 | 0.7281 | 0.7281 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_4-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:36:09+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_4-seqsight\_8192\_512\_30M-L8\_f ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.6126 * F1 Score: 0.6998 * Accuracy: 0.6999 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_4-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset. It achieves the following results on the evaluation set: - Loss: 0.5704 - F1 Score: 0.7011 - Accuracy: 0.7015 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6317 | 1.69 | 200 | 0.5983 | 0.6722 | 0.6760 | | 0.5856 | 3.39 | 400 | 0.5636 | 0.6984 | 0.6984 | | 0.5537 | 5.08 | 600 | 0.5532 | 0.7112 | 0.7138 | | 0.5382 | 6.78 | 800 | 0.5450 | 0.7386 | 0.7387 | | 0.5252 | 8.47 | 1000 | 0.5479 | 0.7290 | 0.7297 | | 0.5047 | 10.17 | 1200 | 0.5433 | 0.7203 | 0.7207 | | 0.4949 | 11.86 | 1400 | 0.5478 | 0.7263 | 0.7270 | | 0.4789 | 13.56 | 1600 | 0.5500 | 0.7245 | 0.7249 | | 0.4638 | 15.25 | 1800 | 0.5529 | 0.7276 | 0.7276 | | 0.4478 | 16.95 | 2000 | 0.5669 | 0.7104 | 0.7116 | | 0.432 | 18.64 | 2200 | 0.5694 | 0.7255 | 0.7260 | | 0.422 | 20.34 | 2400 | 0.5838 | 0.7282 | 0.7281 | | 0.4084 | 22.03 | 2600 | 0.5957 | 0.7314 | 0.7313 | | 0.3935 | 23.73 | 2800 | 0.5820 | 0.7313 | 0.7313 | | 0.382 | 25.42 | 3000 | 0.6444 | 0.7235 | 0.7249 | | 0.3741 | 27.12 | 3200 | 0.6335 | 0.7254 | 0.7254 | | 0.3597 | 28.81 | 3400 | 0.6612 | 0.7186 | 0.7185 | | 0.3444 | 30.51 | 3600 | 0.6478 | 0.7213 | 0.7212 | | 0.3428 | 32.2 | 3800 | 0.6803 | 0.7223 | 0.7223 | | 0.3379 | 33.9 | 4000 | 0.6703 | 0.7168 | 0.7169 | | 0.312 | 35.59 | 4200 | 0.7018 | 0.7139 | 0.7143 | | 0.3171 | 37.29 | 4400 | 0.6989 | 0.7212 | 0.7212 | | 0.2973 | 38.98 | 4600 | 0.7242 | 0.7190 | 0.7191 | | 0.2929 | 40.68 | 4800 | 0.7338 | 0.7101 | 0.7100 | | 0.2837 | 42.37 | 5000 | 0.7864 | 0.7176 | 0.7175 | | 0.2818 | 44.07 | 5200 | 0.7733 | 0.7181 | 0.7180 | | 0.2745 | 45.76 | 5400 | 0.7912 | 0.7123 | 0.7122 | | 0.2673 | 47.46 | 5600 | 0.8100 | 0.7235 | 0.7244 | | 0.2611 | 49.15 | 5800 | 0.7809 | 0.7117 | 0.7116 | | 0.2597 | 50.85 | 6000 | 0.7785 | 0.7138 | 0.7138 | | 0.2481 | 52.54 | 6200 | 0.8297 | 0.7132 | 0.7132 | | 0.2423 | 54.24 | 6400 | 0.8508 | 0.7016 | 0.7015 | | 0.2402 | 55.93 | 6600 | 0.8418 | 0.7085 | 0.7084 | | 0.2325 | 57.63 | 6800 | 0.8314 | 0.7112 | 0.7111 | | 0.2315 | 59.32 | 7000 | 0.8885 | 0.7117 | 0.7116 | | 0.2254 | 61.02 | 7200 | 0.8921 | 0.7074 | 0.7074 | | 0.2231 | 62.71 | 7400 | 0.9142 | 0.7184 | 0.7185 | | 0.2159 | 64.41 | 7600 | 0.9128 | 0.7105 | 0.7111 | | 0.2149 | 66.1 | 7800 | 0.9018 | 0.7139 | 0.7138 | | 0.2137 | 67.8 | 8000 | 0.9168 | 0.7043 | 0.7042 | | 0.2092 | 69.49 | 8200 | 0.9040 | 0.7135 | 0.7138 | | 0.2042 | 71.19 | 8400 | 0.9157 | 0.7102 | 0.7106 | | 0.2061 | 72.88 | 8600 | 0.8987 | 0.7109 | 0.7111 | | 0.2004 | 74.58 | 8800 | 0.9239 | 0.7089 | 0.7090 | | 0.202 | 76.27 | 
9000 | 0.9158 | 0.7095 | 0.7095 | | 0.1969 | 77.97 | 9200 | 0.9263 | 0.7048 | 0.7047 | | 0.1947 | 79.66 | 9400 | 0.9382 | 0.7039 | 0.7042 | | 0.1934 | 81.36 | 9600 | 0.9429 | 0.7052 | 0.7053 | | 0.1914 | 83.05 | 9800 | 0.9465 | 0.7024 | 0.7026 | | 0.192 | 84.75 | 10000 | 0.9481 | 0.7029 | 0.7031 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_4-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:36:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_4-seqsight\_8192\_512\_30M-L32\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.5704 * F1 Score: 0.7011 * Accuracy: 0.7015 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dinov2-s-201 This model is a fine-tuned version of [facebook/dinov2-small-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-small-imagenet1k-1-layer) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5503 - Accuracy: 0.8049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 5 | 1.7244 | 0.2195 | | 1.4057 | 2.0 | 10 | 1.1285 | 0.5122 | | 1.4057 | 3.0 | 15 | 0.6513 | 0.7561 | | 0.8392 | 4.0 | 20 | 0.5946 | 0.8049 | | 0.8392 | 5.0 | 25 | 0.6221 | 0.8293 | | 0.6571 | 6.0 | 30 | 1.3668 | 0.4878 | | 0.6571 | 7.0 | 35 | 0.6909 | 0.6585 | | 0.7314 | 8.0 | 40 | 0.6185 | 0.7073 | | 0.7314 | 9.0 | 45 | 1.1204 | 0.5122 | | 0.6679 | 10.0 | 50 | 0.6920 | 0.7073 | | 0.6679 | 11.0 | 55 | 0.5515 | 0.7561 | | 0.5023 | 12.0 | 60 | 0.8328 | 0.6829 | | 0.5023 | 13.0 | 65 | 0.5849 | 0.7805 | | 0.5507 | 14.0 | 70 | 0.4574 | 0.8293 | | 0.5507 | 15.0 | 75 | 0.7229 | 0.7317 | | 0.4605 | 16.0 | 80 | 0.6463 | 0.6829 | | 0.4605 | 17.0 | 85 | 0.5158 | 0.7805 | | 0.3592 | 18.0 | 90 | 0.5429 | 0.7317 | | 0.3592 | 19.0 | 95 | 0.4544 | 0.8293 | | 0.3719 | 20.0 | 100 | 0.5683 | 0.7805 | | 0.3719 | 21.0 | 105 | 0.7423 | 0.7073 | | 0.4792 | 22.0 | 110 | 0.6053 | 0.7561 | | 0.4792 | 23.0 | 115 | 0.5218 | 0.8049 | | 0.3421 | 24.0 | 120 | 0.5553 | 0.8049 | | 0.3421 | 25.0 | 125 | 0.6367 | 0.7805 | | 0.3528 | 26.0 | 130 | 0.3843 | 0.8049 | | 0.3528 | 27.0 | 135 | 0.6923 | 0.7317 | | 0.3335 | 28.0 | 140 | 0.6799 | 0.7073 | | 0.3335 | 29.0 | 145 | 1.0437 | 0.6098 | | 0.2933 | 30.0 | 150 | 0.8362 | 0.7073 | | 0.2933 | 31.0 | 155 | 0.6174 | 0.7073 | | 0.2902 | 32.0 | 160 | 0.5487 | 0.8780 | | 0.2902 | 33.0 | 165 | 0.6631 | 0.8049 | | 0.3046 | 34.0 | 170 | 0.7015 | 0.7561 | | 0.3046 | 35.0 | 175 | 0.5250 | 0.8049 | | 0.2355 | 36.0 | 180 | 0.6684 | 0.8537 | | 0.2355 | 37.0 | 185 | 0.5820 | 0.7805 | | 0.21 | 38.0 | 190 | 0.7903 | 0.7805 | | 0.21 | 39.0 | 195 | 0.4358 | 0.9024 | | 0.1833 | 40.0 | 200 | 0.8039 | 0.8293 | | 0.1833 | 41.0 | 205 | 0.6242 | 0.8537 | | 0.2227 | 42.0 | 210 | 0.7574 | 0.7073 | | 0.2227 | 43.0 | 215 | 0.8873 | 0.7561 | | 0.1831 | 44.0 | 220 | 0.9501 | 0.7561 | | 0.1831 | 45.0 | 225 | 0.8774 | 0.8293 | | 0.1815 | 46.0 | 230 | 0.7826 | 0.8049 | | 0.1815 | 47.0 | 235 | 1.1516 | 0.6829 | | 0.1615 | 48.0 | 240 | 0.6514 | 0.8537 | | 0.1615 | 49.0 | 245 | 0.5799 | 0.8049 | | 0.1381 | 50.0 | 250 | 0.7545 | 0.7805 | | 0.1381 | 51.0 | 255 | 0.5452 | 0.8049 | | 0.1462 | 52.0 | 260 | 0.7610 | 0.8049 | | 0.1462 | 53.0 | 265 | 0.7827 | 0.8049 | | 0.1096 | 54.0 | 270 | 0.6393 | 0.8537 | | 0.1096 | 55.0 | 275 | 0.5902 | 0.8293 | | 0.0914 | 56.0 | 280 | 0.7998 | 0.8537 | | 0.0914 | 57.0 | 285 | 0.9032 | 0.7805 | | 0.1674 | 58.0 | 290 | 0.5467 | 0.8537 | 
| 0.1674 | 59.0 | 295 | 0.9872 | 0.7805 | | 0.086 | 60.0 | 300 | 0.6481 | 0.8537 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "facebook/dinov2-small-imagenet1k-1-layer", "model-index": [{"name": "dinov2-s-201", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8048780487804879, "name": "Accuracy"}]}]}]}
niraj003/dinov2-s-201
null
[ "transformers", "tensorboard", "safetensors", "dinov2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/dinov2-small-imagenet1k-1-layer", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:38:05+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #dinov2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/dinov2-small-imagenet1k-1-layer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
dinov2-s-201 ============ This model is a fine-tuned version of facebook/dinov2-small-imagenet1k-1-layer on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.5503 * Accuracy: 0.8049 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 60 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 60", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #dinov2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/dinov2-small-imagenet1k-1-layer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 60", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/56fpct9
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T05:40:45+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5379 - F1 Score: 0.8535 - Accuracy: 0.8536 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.613 | 13.33 | 200 | 0.5346 | 0.7193 | 0.7197 | | 0.5019 | 26.67 | 400 | 0.4513 | 0.7947 | 0.7950 | | 0.4154 | 40.0 | 600 | 0.3783 | 0.8451 | 0.8452 | | 0.348 | 53.33 | 800 | 0.3802 | 0.8452 | 0.8452 | | 0.2999 | 66.67 | 1000 | 0.3966 | 0.8367 | 0.8368 | | 0.2716 | 80.0 | 1200 | 0.4111 | 0.8452 | 0.8452 | | 0.2493 | 93.33 | 1400 | 0.4071 | 0.8494 | 0.8494 | | 0.2272 | 106.67 | 1600 | 0.4158 | 0.8536 | 0.8536 | | 0.2063 | 120.0 | 1800 | 0.4486 | 0.8577 | 0.8577 | | 0.1976 | 133.33 | 2000 | 0.4577 | 0.8703 | 0.8703 | | 0.1834 | 146.67 | 2200 | 0.4825 | 0.8410 | 0.8410 | | 0.1666 | 160.0 | 2400 | 0.5210 | 0.8242 | 0.8243 | | 0.1606 | 173.33 | 2600 | 0.5225 | 0.8492 | 0.8494 | | 0.1521 | 186.67 | 2800 | 0.5313 | 0.8452 | 0.8452 | | 0.1472 | 200.0 | 3000 | 0.5453 | 0.8410 | 0.8410 | | 0.1404 | 213.33 | 3200 | 0.5693 | 0.8367 | 0.8368 | | 0.1352 | 226.67 | 3400 | 0.5634 | 0.8368 | 0.8368 | | 0.1282 | 240.0 | 3600 | 0.5961 | 0.8241 | 0.8243 | | 0.1208 | 253.33 | 3800 | 0.6403 | 0.8240 | 0.8243 | | 0.1195 | 266.67 | 4000 | 0.6082 | 0.8200 | 0.8201 | | 0.1112 | 280.0 | 4200 | 0.6709 | 0.8284 | 0.8285 | | 0.1079 | 293.33 | 4400 | 0.6780 | 0.8284 | 0.8285 | | 0.1079 | 306.67 | 4600 | 0.6618 | 0.8408 | 0.8410 | | 0.1052 | 320.0 | 4800 | 0.6600 | 0.8409 | 0.8410 | | 0.1008 | 333.33 | 5000 | 0.6764 | 0.8452 | 0.8452 | | 0.0994 | 346.67 | 5200 | 0.7030 | 0.8284 | 0.8285 | | 0.0993 | 360.0 | 5400 | 0.6886 | 0.8243 | 0.8243 | | 0.097 | 373.33 | 5600 | 0.6909 | 0.8326 | 0.8326 | | 0.0938 | 386.67 | 5800 | 0.6842 | 0.8326 | 0.8326 | | 0.0871 | 400.0 | 6000 | 0.7277 | 0.8326 | 0.8326 | | 0.0864 | 413.33 | 6200 | 0.7443 | 0.8368 | 0.8368 | | 0.088 | 426.67 | 6400 | 0.7257 | 0.8368 | 0.8368 | | 0.0883 | 440.0 | 6600 | 0.7210 | 0.8326 | 0.8326 | | 0.085 | 453.33 | 6800 | 0.7380 | 0.8240 | 0.8243 | | 0.0853 | 466.67 | 7000 | 0.7352 | 0.8198 | 0.8201 | | 0.0793 | 480.0 | 7200 | 0.7687 | 0.8201 | 0.8201 | | 0.082 | 493.33 | 7400 | 0.7717 | 0.8284 | 0.8285 | | 0.0776 | 506.67 | 7600 | 0.7794 | 0.8159 | 0.8159 | | 0.08 | 520.0 | 7800 | 0.7773 | 0.8284 | 0.8285 | | 0.0803 | 533.33 | 8000 | 0.7670 | 0.8200 | 0.8201 | | 0.0816 | 546.67 | 8200 | 0.7660 | 0.8241 | 0.8243 | | 0.0768 | 560.0 | 8400 | 0.7663 | 0.8284 | 0.8285 | | 0.0805 | 573.33 | 8600 | 0.7833 | 0.8201 | 0.8201 | | 0.0748 | 586.67 | 8800 | 0.7937 | 0.8326 | 
0.8326 | | 0.0753 | 600.0 | 9000 | 0.7866 | 0.8241 | 0.8243 | | 0.0748 | 613.33 | 9200 | 0.7897 | 0.8326 | 0.8326 | | 0.0736 | 626.67 | 9400 | 0.7886 | 0.8326 | 0.8326 | | 0.0742 | 640.0 | 9600 | 0.7887 | 0.8326 | 0.8326 | | 0.0759 | 653.33 | 9800 | 0.7869 | 0.8326 | 0.8326 | | 0.0727 | 666.67 | 10000 | 0.7866 | 0.8368 | 0.8368 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_3-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:42:35+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_3-seqsight\_8192\_512\_30M-L1\_f ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5379 * F1 Score: 0.8535 * Accuracy: 0.8536 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
LIne2ColorID LoRA for SD 1.5! This is an experimental LoRA that generates images in a style similar to a Color ID pass. Because it was trained on anime images, it doesn't work well with photorealistic models. You can use it in conjunction with the Lineart ControlNet. Add the following prompts: black background, colorid, green hair, blue cloth, red skin, orange face, yellow eyes (Hair: Green, Skin: Red, Clothes: Blue, Eyes: Yellow, Face: Orange) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2b20aeb9e8a5f05cf9a9d/n3fcGEjSSbRm71i5v7Y62.png) <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/62f2b20aeb9e8a5f05cf9a9d/L5L_CZDgc2rpatYyLzJZ8.mp4"></video>
{}
toyxyz/LIne2ColorID
null
[ "region:us" ]
null
2024-04-27T05:42:47+00:00
[]
[]
TAGS #region-us
LIne2ColorID LoRA for SD 1.5! This is an experimental LoRA that generates images in a style similar to a Color ID pass. Because it was trained on anime images, it doesn't work well with photorealistic models. You can use it in conjunction with the Lineart ControlNet. Add the following prompts: black background, colorid, green hair, blue cloth, red skin, orange face, yellow eyes (Hair: Green, Skin: Red, Clothes: Blue, Eyes: Yellow, Face: Orange) !image/png <video controls autoplay src="URL
[]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
# llama-3-8b-instruct-262k-chinese llama-3-8b-instruct-262k-chinese is a chat model fine-tuned from [Llama-3-8B-Instruct-262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k) with the ORPO method on the Chinese-English preference dataset [shibing624/DPO-En-Zh-20k-Preference](https://huggingface.co/datasets/shibing624/DPO-En-Zh-20k-Preference). For deployment, training, and related methods, see the MedicalGPT GitHub repository: [https://github.com/shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT) ## Related models - Full model weights: https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese - LoRA weights: https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese-lora ## Features Strengths: 1. Supports an ultra-long context length of 262k tokens, well suited for RAG 2. Supports both Chinese and English 3. Supports multi-turn dialogue, with strong coding and reasoning ability and solid English knowledge 4. GPU memory needed for inference: Quantization | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens -- | -- | -- FP16/BF16 | 18.66GB | 24.58GB Int4 | 9.21GB | 14.62GB Weaknesses: 1. At only 8B parameters, hallucination is noticeable on knowledge-based Q&A 2. Chinese knowledge is limited and prone to hallucination, especially classical Chinese, a common weakness of Llama-family models ## How to use ```python import transformers import torch model_id = "shibing624/llama-3-8b-instruct-262k-chinese" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.float16}, device="cuda", ) messages = [{"role": "system", "content": ""}] messages.append({"role": "user", "content": "介绍一下机器学习"}) prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=512, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9 ) content = outputs[0]["generated_text"][len(prompt):] print(content) ``` result (sample Chinese generation, kept verbatim): ```shell 机器学习(Machine Learning)是一种基于计算机算法的自动数据分析技术,用于从数据中学习并预测未来的结果。它是人工智能(AI)和数据挖掘(Data Mining)的子领域,旨在通过训练和调整算法来发现数据中的模式、关系和规律。 机器学习算法可以分为监督学习、无监督学习和半监督学习三类: 1. 监督学习(Supervised Learning):在这种类型的学习中,算法被提供带有标签的数据集,用于训练。算法学习如何将输入数据映射到输出数据,并在新数据上进行预测。常见的监督学习算法包括逻辑回归、决策树、支持向量机(SVM)、随机森林和神经网络。 2. 无监督学习(Unsupervised Learning):在这种类型的学习中,算法没有标签数据。算法学习数据中的模式、结构和关系,并可能发现新的数据集群或特征。常见的无监督学习算法包括聚类、主成分分析(PCA)、独立成分分析(ICA)和高维度数据降维。 3. 半监督学习(Semi-supervised Learning):在这种类型的学习中,算法被提供部分带有标签的数据集。算法学习如何将输入数据映射到输出数据,并在新数据上进行预测。半监督学习算法结合了监督学习和无监督学习的优点,常见的半监督学习算法包括自我标注(Self-Labeling)和基于图的半监督学习(Graph-based Semi-supervised Learning)。 机器学习的应用广泛,包括自然语言处理、计算机视觉、推荐系统、人工智能和自动驾驶等领域。它的优势包括: 1. 自动化:机器学习算法可以自动从数据中发现模式和关系,无需人为干预。 2. 高效性:机器学习算法可以处理大量数据,并且可以在不需要人为干预的情况下进行预测。 3. 适应性:机器学习算法可以根据数据集的变化和更新进行调整。 4. 精准性:机器学习算法可以通过训练和测试来提高预测的准确性。 ``` ## Training details train loss: <img src="https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese/raw/main/train_lossv2.svg" width="600"> eval loss: <img src="https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese/raw/main/eval_lossv2.svg" width="600"> # About Llama-3-8B-Instruct-262k Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. Reach out to Gradient to learn more or to collaborate on a custom model. This model extends Llama-3 8B's context length from 8k to > 160K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta. 
<img src="https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/hiHWva3CbsrnPvZTp5-lu.png" width="600"> **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by a new data-driven RoPE theta optimization technique - Progressive training on increasing context lengths similar to the [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 262144 tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). **Progressive Training Details:** | Parameter | 65K | 262K | |-----------------------------|----------------|------------| | Initialize From | LLaMA-3-8B-Inst| 65K | | Sequence Length | 2^16 | 2^18 | | RoPE theta | 15.3 M | 207.1 M | | Batch Size (Tokens / Step) | 2.097 M | 4.192 M | | Steps | 30 | 24 | | Total Tokens | 63 M | 101 M | | Learning Rate | 2.00E-05 | 2.00E-05 | | # GPUs | 32 | 32 | | GPU Type | NVIDIA L40S | NVIDIA L40S|
{"language": ["zh", "en"], "license": "other", "tags": ["llama3", "chinese", "meta"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"}
shibing624/llama-3-8b-instruct-262k-chinese
null
[ "transformers", "safetensors", "llama", "text-generation", "llama3", "chinese", "meta", "conversational", "zh", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T05:44:17+00:00
[]
[ "zh", "en" ]
TAGS #transformers #safetensors #llama #text-generation #llama3 #chinese #meta #conversational #zh #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
llama-3-8b-instruct-262k-chinese ================================ llama-3-8b-instruct-262k-chinese is a chat model fine-tuned from Llama-3-8B-Instruct-262k with the ORPO method on the Chinese-English preference dataset shibing624/DPO-En-Zh-20k-Preference. For deployment, training, and related methods, see the MedicalGPT GitHub repository: URL Related models -------------- * Full model weights: URL * LoRA weights: URL Features -------- Strengths: 1. Supports an ultra-long context length of 262k tokens, well suited for RAG 2. Supports both Chinese and English 3. Supports multi-turn dialogue, with strong coding and reasoning ability and solid English knowledge 4. GPU memory needed for inference: Quantization: FP16/BF16, Peak Usage for Encoding 2048 Tokens: 18.66GB, Peak Usage for Generating 8192 Tokens: 24.58GB Quantization: Int4, Peak Usage for Encoding 2048 Tokens: 9.21GB, Peak Usage for Generating 8192 Tokens: 14.62GB Weaknesses: 1. At only 8B parameters, hallucination is noticeable on knowledge-based Q&A 2. Chinese knowledge is limited and prone to hallucination, especially classical Chinese, a common weakness of Llama-family models How to use ---------- result: Training details ---------------- train loss: <img src="URL width="600"> eval loss: <img src="URL width="600"> About Llama-3-8B-Instruct-262k ============================== Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. Reach out to Gradient to learn more or to collaborate on a custom model. This model extends Llama-3 8B's context length from 8k to > 160K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta. <img src="URL width="600"> Approach: * meta-llama/Meta-Llama-3-8B-Instruct as the base * NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by a new data-driven RoPE theta optimization technique * Progressive training on increasing context lengths similar to the Large World Model [2] (See details below) Infra: We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 262144 tokens on the Crusoe Energy high performance L40S cluster. Data: For training data, we generate long contexts by augmenting SlimPajama. Progressive Training Details: Parameter: Initialize From, 65K: LLaMA-3-8B-Inst, 262K: 65K Parameter: Sequence Length, 65K: 2^16, 262K: 2^18 Parameter: RoPE theta, 65K: 15.3 M, 262K: 207.1 M Parameter: Batch Size (Tokens / Step), 65K: 2.097 M, 262K: 4.192 M Parameter: Steps, 65K: 30, 262K: 24 Parameter: Total Tokens, 65K: 63 M, 262K: 101 M Parameter: Learning Rate, 65K: 2.00E-05, 262K: 2.00E-05 Parameter: # GPUs, 65K: 32, 262K: 32 Parameter: GPU Type, 65K: NVIDIA L40S, 262K: NVIDIA L40S
[ "# GPUs, 65K: 32, 262K: 32\nParameter: GPU Type, 65K: NVIDIA L40S, 262K: NVIDIA L40S" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #llama3 #chinese #meta #conversational #zh #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# GPUs, 65K: 32, 262K: 32\nParameter: GPU Type, 65K: NVIDIA L40S, 262K: NVIDIA L40S" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 1.1473 - F1 Score: 0.8409 - Accuracy: 0.8410 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4803 | 13.33 | 200 | 0.3543 | 0.8368 | 0.8368 | | 0.2249 | 26.67 | 400 | 0.4923 | 0.8410 | 0.8410 | | 0.1241 | 40.0 | 600 | 0.7904 | 0.7980 | 0.7992 | | 0.0702 | 53.33 | 800 | 0.9285 | 0.8074 | 0.8075 | | 0.0509 | 66.67 | 1000 | 0.8517 | 0.8152 | 0.8159 | | 0.0349 | 80.0 | 1200 | 0.9121 | 0.8242 | 0.8243 | | 0.0262 | 93.33 | 1400 | 0.9590 | 0.8243 | 0.8243 | | 0.0264 | 106.67 | 1600 | 0.9886 | 0.8410 | 0.8410 | | 0.0177 | 120.0 | 1800 | 1.0063 | 0.8284 | 0.8285 | | 0.013 | 133.33 | 2000 | 1.2040 | 0.8368 | 0.8368 | | 0.0162 | 146.67 | 2200 | 1.1041 | 0.8533 | 0.8536 | | 0.013 | 160.0 | 2400 | 1.2578 | 0.8159 | 0.8159 | | 0.0138 | 173.33 | 2600 | 0.9836 | 0.8452 | 0.8452 | | 0.0093 | 186.67 | 2800 | 1.1183 | 0.8368 | 0.8368 | | 0.0101 | 200.0 | 3000 | 1.0961 | 0.8452 | 0.8452 | | 0.0111 | 213.33 | 3200 | 0.9007 | 0.8577 | 0.8577 | | 0.0094 | 226.67 | 3400 | 1.0733 | 0.8408 | 0.8410 | | 0.0103 | 240.0 | 3600 | 1.0371 | 0.8243 | 0.8243 | | 0.0042 | 253.33 | 3800 | 1.1633 | 0.8368 | 0.8368 | | 0.009 | 266.67 | 4000 | 1.0699 | 0.8452 | 0.8452 | | 0.0073 | 280.0 | 4200 | 1.1294 | 0.8450 | 0.8452 | | 0.0053 | 293.33 | 4400 | 1.3100 | 0.8452 | 0.8452 | | 0.005 | 306.67 | 4600 | 1.2680 | 0.8408 | 0.8410 | | 0.0064 | 320.0 | 4800 | 1.0098 | 0.8493 | 0.8494 | | 0.0048 | 333.33 | 5000 | 1.2811 | 0.8450 | 0.8452 | | 0.0039 | 346.67 | 5200 | 1.3538 | 0.8284 | 0.8285 | | 0.0056 | 360.0 | 5400 | 1.3837 | 0.8367 | 0.8368 | | 0.0034 | 373.33 | 5600 | 1.5433 | 0.8198 | 0.8201 | | 0.004 | 386.67 | 5800 | 1.3904 | 0.8284 | 0.8285 | | 0.0033 | 400.0 | 6000 | 1.3728 | 0.8075 | 0.8075 | | 0.0045 | 413.33 | 6200 | 1.4619 | 0.8367 | 0.8368 | | 0.0044 | 426.67 | 6400 | 1.2779 | 0.8285 | 0.8285 | | 0.0027 | 440.0 | 6600 | 1.2879 | 0.8324 | 0.8326 | | 0.0033 | 453.33 | 6800 | 1.2179 | 0.8494 | 0.8494 | | 0.0015 | 466.67 | 7000 | 1.3028 | 0.8280 | 0.8285 | | 0.0026 | 480.0 | 7200 | 1.3398 | 0.8280 | 0.8285 | | 0.002 | 493.33 | 7400 | 1.2803 | 0.8452 | 0.8452 | | 0.0014 | 506.67 | 7600 | 1.3104 | 0.8408 | 0.8410 | | 0.003 | 520.0 | 7800 | 1.3562 | 0.8451 | 0.8452 | | 0.0021 | 533.33 | 8000 | 1.3905 | 0.8243 | 0.8243 | | 0.0018 | 546.67 | 8200 | 1.4232 | 0.8285 | 0.8285 | | 0.0016 | 560.0 | 8400 | 1.4825 | 0.8280 | 0.8285 | | 0.0021 | 573.33 | 8600 | 1.3714 | 0.8451 | 0.8452 | | 0.0019 | 586.67 | 8800 | 1.4865 | 0.8325 | 
0.8326 | | 0.0023 | 600.0 | 9000 | 1.3422 | 0.8326 | 0.8326 | | 0.0019 | 613.33 | 9200 | 1.3684 | 0.8368 | 0.8368 | | 0.0009 | 626.67 | 9400 | 1.4483 | 0.8326 | 0.8326 | | 0.0011 | 640.0 | 9600 | 1.4090 | 0.8410 | 0.8410 | | 0.0012 | 653.33 | 9800 | 1.4079 | 0.8451 | 0.8452 | | 0.0008 | 666.67 | 10000 | 1.4164 | 0.8451 | 0.8452 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_3-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:44:40+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_3-seqsight\_8192\_512\_30M-L32\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset. It achieves the following results on the evaluation set: * Loss: 1.1473 * F1 Score: 0.8409 * Accuracy: 0.8410 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4271 - F1 Score: 0.8532 - Accuracy: 0.8536 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5526 | 13.33 | 200 | 0.3903 | 0.8075 | 0.8075 | | 0.3247 | 26.67 | 400 | 0.3813 | 0.8493 | 0.8494 | | 0.229 | 40.0 | 600 | 0.4402 | 0.8326 | 0.8326 | | 0.1767 | 53.33 | 800 | 0.5199 | 0.8451 | 0.8452 | | 0.1331 | 66.67 | 1000 | 0.6064 | 0.8325 | 0.8326 | | 0.1045 | 80.0 | 1200 | 0.6995 | 0.8409 | 0.8410 | | 0.0923 | 93.33 | 1400 | 0.6936 | 0.8198 | 0.8201 | | 0.0705 | 106.67 | 1600 | 0.7835 | 0.8324 | 0.8326 | | 0.0617 | 120.0 | 1800 | 0.8372 | 0.8075 | 0.8075 | | 0.0526 | 133.33 | 2000 | 0.8845 | 0.8197 | 0.8201 | | 0.0463 | 146.67 | 2200 | 0.9266 | 0.8116 | 0.8117 | | 0.0421 | 160.0 | 2400 | 1.0798 | 0.8321 | 0.8326 | | 0.0362 | 173.33 | 2600 | 1.0632 | 0.8235 | 0.8243 | | 0.0321 | 186.67 | 2800 | 1.1024 | 0.8155 | 0.8159 | | 0.0316 | 200.0 | 3000 | 1.0857 | 0.8194 | 0.8201 | | 0.0291 | 213.33 | 3200 | 1.0118 | 0.8241 | 0.8243 | | 0.0264 | 226.67 | 3400 | 1.0152 | 0.8116 | 0.8117 | | 0.0245 | 240.0 | 3600 | 1.0778 | 0.8159 | 0.8159 | | 0.0192 | 253.33 | 3800 | 1.2326 | 0.8281 | 0.8285 | | 0.02 | 266.67 | 4000 | 1.1461 | 0.8241 | 0.8243 | | 0.0211 | 280.0 | 4200 | 1.1157 | 0.8325 | 0.8326 | | 0.0202 | 293.33 | 4400 | 1.1613 | 0.8201 | 0.8201 | | 0.0168 | 306.67 | 4600 | 1.2245 | 0.8282 | 0.8285 | | 0.0144 | 320.0 | 4800 | 1.1559 | 0.8325 | 0.8326 | | 0.0151 | 333.33 | 5000 | 1.2483 | 0.8364 | 0.8368 | | 0.015 | 346.67 | 5200 | 1.2253 | 0.8326 | 0.8326 | | 0.0148 | 360.0 | 5400 | 1.2649 | 0.8284 | 0.8285 | | 0.0134 | 373.33 | 5600 | 1.2890 | 0.8285 | 0.8285 | | 0.0155 | 386.67 | 5800 | 1.2662 | 0.8326 | 0.8326 | | 0.0115 | 400.0 | 6000 | 1.3286 | 0.8326 | 0.8326 | | 0.0116 | 413.33 | 6200 | 1.3486 | 0.8324 | 0.8326 | | 0.0119 | 426.67 | 6400 | 1.2944 | 0.8241 | 0.8243 | | 0.0112 | 440.0 | 6600 | 1.2818 | 0.8326 | 0.8326 | | 0.013 | 453.33 | 6800 | 1.2444 | 0.8368 | 0.8368 | | 0.0079 | 466.67 | 7000 | 1.2534 | 0.8284 | 0.8285 | | 0.0094 | 480.0 | 7200 | 1.3682 | 0.8448 | 0.8452 | | 0.0088 | 493.33 | 7400 | 1.3350 | 0.8284 | 0.8285 | | 0.0081 | 506.67 | 7600 | 1.3950 | 0.8366 | 0.8368 | | 0.0092 | 520.0 | 7800 | 1.3067 | 0.8326 | 0.8326 | | 0.0087 | 533.33 | 8000 | 1.3583 | 0.8326 | 0.8326 | | 0.0094 | 546.67 | 8200 | 1.4055 | 0.8408 | 0.8410 | | 0.008 | 560.0 | 8400 | 1.3319 | 0.8368 | 0.8368 | | 0.0071 | 573.33 | 8600 | 1.3699 | 0.8326 | 0.8326 | | 0.0074 | 586.67 | 8800 | 1.4303 | 0.8324 | 
0.8326 | | 0.0073 | 600.0 | 9000 | 1.3714 | 0.8326 | 0.8326 | | 0.0081 | 613.33 | 9200 | 1.3644 | 0.8284 | 0.8285 | | 0.0067 | 626.67 | 9400 | 1.3521 | 0.8325 | 0.8326 | | 0.007 | 640.0 | 9600 | 1.3531 | 0.8325 | 0.8326 | | 0.006 | 653.33 | 9800 | 1.3745 | 0.8283 | 0.8285 | | 0.0067 | 666.67 | 10000 | 1.3686 | 0.8283 | 0.8285 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_3-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:45:08+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_3-seqsight\_8192\_512\_30M-L8\_f ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4271 * F1 Score: 0.8532 * Accuracy: 0.8536 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4323 - F1 Score: 0.8749 - Accuracy: 0.875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4476 | 9.52 | 200 | 0.3555 | 0.8108 | 0.8110 | | 0.3206 | 19.05 | 400 | 0.3355 | 0.8413 | 0.8415 | | 0.2895 | 28.57 | 600 | 0.3284 | 0.8536 | 0.8537 | | 0.2628 | 38.1 | 800 | 0.3192 | 0.8598 | 0.8598 | | 0.2341 | 47.62 | 1000 | 0.3126 | 0.8506 | 0.8506 | | 0.2149 | 57.14 | 1200 | 0.3150 | 0.8689 | 0.8689 | | 0.1954 | 66.67 | 1400 | 0.3327 | 0.8658 | 0.8659 | | 0.1826 | 76.19 | 1600 | 0.3650 | 0.8625 | 0.8628 | | 0.1651 | 85.71 | 1800 | 0.3472 | 0.8627 | 0.8628 | | 0.1523 | 95.24 | 2000 | 0.3714 | 0.8597 | 0.8598 | | 0.144 | 104.76 | 2200 | 0.3890 | 0.8596 | 0.8598 | | 0.136 | 114.29 | 2400 | 0.4043 | 0.8687 | 0.8689 | | 0.1308 | 123.81 | 2600 | 0.4138 | 0.8718 | 0.8720 | | 0.1243 | 133.33 | 2800 | 0.4041 | 0.8718 | 0.8720 | | 0.1185 | 142.86 | 3000 | 0.4698 | 0.8687 | 0.8689 | | 0.1142 | 152.38 | 3200 | 0.4658 | 0.8778 | 0.8780 | | 0.106 | 161.9 | 3400 | 0.4865 | 0.8778 | 0.8780 | | 0.1041 | 171.43 | 3600 | 0.4803 | 0.8809 | 0.8811 | | 0.0929 | 180.95 | 3800 | 0.5408 | 0.8746 | 0.875 | | 0.0951 | 190.48 | 4000 | 0.4773 | 0.8780 | 0.8780 | | 0.0911 | 200.0 | 4200 | 0.5256 | 0.8778 | 0.8780 | | 0.0887 | 209.52 | 4400 | 0.5495 | 0.8778 | 0.8780 | | 0.0843 | 219.05 | 4600 | 0.5791 | 0.8623 | 0.8628 | | 0.0861 | 228.57 | 4800 | 0.5309 | 0.8809 | 0.8811 | | 0.0803 | 238.1 | 5000 | 0.5498 | 0.8778 | 0.8780 | | 0.0752 | 247.62 | 5200 | 0.6053 | 0.8715 | 0.8720 | | 0.0743 | 257.14 | 5400 | 0.5967 | 0.8685 | 0.8689 | | 0.0765 | 266.67 | 5600 | 0.5486 | 0.8778 | 0.8780 | | 0.0768 | 276.19 | 5800 | 0.5428 | 0.8778 | 0.8780 | | 0.0718 | 285.71 | 6000 | 0.5733 | 0.8778 | 0.8780 | | 0.0696 | 295.24 | 6200 | 0.5869 | 0.8778 | 0.8780 | | 0.0664 | 304.76 | 6400 | 0.5818 | 0.8809 | 0.8811 | | 0.0668 | 314.29 | 6600 | 0.6055 | 0.8777 | 0.8780 | | 0.0624 | 323.81 | 6800 | 0.6224 | 0.8777 | 0.8780 | | 0.0659 | 333.33 | 7000 | 0.5996 | 0.8778 | 0.8780 | | 0.0631 | 342.86 | 7200 | 0.5962 | 0.8748 | 0.875 | | 0.0605 | 352.38 | 7400 | 0.6277 | 0.8717 | 0.8720 | | 0.0588 | 361.9 | 7600 | 0.6448 | 0.8716 | 0.8720 | | 0.0575 | 371.43 | 7800 | 0.6577 | 0.8684 | 0.8689 | | 0.0582 | 380.95 | 8000 | 0.6353 | 0.8717 | 0.8720 | | 0.0603 | 390.48 | 8200 | 0.6436 | 0.8715 | 0.8720 | | 0.0597 | 400.0 | 8400 | 0.6446 | 0.8683 | 0.8689 | | 0.0619 | 409.52 | 8600 | 0.6040 | 0.8747 | 0.875 | | 0.0538 | 419.05 | 8800 | 0.6475 | 
0.8714 | 0.8720 | | 0.0543 | 428.57 | 9000 | 0.6480 | 0.8715 | 0.8720 | | 0.0533 | 438.1 | 9200 | 0.6366 | 0.8716 | 0.8720 | | 0.0588 | 447.62 | 9400 | 0.6348 | 0.8716 | 0.8720 | | 0.0522 | 457.14 | 9600 | 0.6399 | 0.8716 | 0.8720 | | 0.0543 | 466.67 | 9800 | 0.6409 | 0.8716 | 0.8720 | | 0.0535 | 476.19 | 10000 | 0.6396 | 0.8716 | 0.8720 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_2-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:45:12+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_2-seqsight\_8192\_512\_30M-L1\_f ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.4323 * F1 Score: 0.8749 * Accuracy: 0.875 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.7217 - F1 Score: 0.8810 - Accuracy: 0.8811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3982 | 9.52 | 200 | 0.3262 | 0.8505 | 0.8506 | | 0.2581 | 19.05 | 400 | 0.2917 | 0.8841 | 0.8841 | | 0.1954 | 28.57 | 600 | 0.3037 | 0.8750 | 0.875 | | 0.1537 | 38.1 | 800 | 0.3400 | 0.8750 | 0.875 | | 0.1215 | 47.62 | 1000 | 0.3925 | 0.8902 | 0.8902 | | 0.0994 | 57.14 | 1200 | 0.4933 | 0.8809 | 0.8811 | | 0.0788 | 66.67 | 1400 | 0.5644 | 0.8777 | 0.8780 | | 0.071 | 76.19 | 1600 | 0.5420 | 0.8748 | 0.875 | | 0.0562 | 85.71 | 1800 | 0.5823 | 0.8902 | 0.8902 | | 0.0485 | 95.24 | 2000 | 0.6354 | 0.8870 | 0.8872 | | 0.0403 | 104.76 | 2200 | 0.6703 | 0.8780 | 0.8780 | | 0.0389 | 114.29 | 2400 | 0.6109 | 0.8839 | 0.8841 | | 0.036 | 123.81 | 2600 | 0.5863 | 0.8871 | 0.8872 | | 0.0317 | 133.33 | 2800 | 0.6698 | 0.8748 | 0.875 | | 0.0322 | 142.86 | 3000 | 0.6769 | 0.8687 | 0.8689 | | 0.0297 | 152.38 | 3200 | 0.6483 | 0.8902 | 0.8902 | | 0.0231 | 161.9 | 3400 | 0.7186 | 0.8685 | 0.8689 | | 0.0238 | 171.43 | 3600 | 0.7712 | 0.8779 | 0.8780 | | 0.0201 | 180.95 | 3800 | 0.7197 | 0.8871 | 0.8872 | | 0.0189 | 190.48 | 4000 | 0.7338 | 0.8811 | 0.8811 | | 0.0189 | 200.0 | 4200 | 0.7400 | 0.8809 | 0.8811 | | 0.018 | 209.52 | 4400 | 0.7246 | 0.8809 | 0.8811 | | 0.0163 | 219.05 | 4600 | 0.7142 | 0.8809 | 0.8811 | | 0.0178 | 228.57 | 4800 | 0.7087 | 0.8872 | 0.8872 | | 0.0124 | 238.1 | 5000 | 0.8295 | 0.8839 | 0.8841 | | 0.0107 | 247.62 | 5200 | 0.9201 | 0.8746 | 0.875 | | 0.0126 | 257.14 | 5400 | 0.8516 | 0.8808 | 0.8811 | | 0.0123 | 266.67 | 5600 | 0.7599 | 0.8871 | 0.8872 | | 0.0118 | 276.19 | 5800 | 0.7666 | 0.8933 | 0.8933 | | 0.0109 | 285.71 | 6000 | 0.7882 | 0.8840 | 0.8841 | | 0.0091 | 295.24 | 6200 | 0.8149 | 0.8871 | 0.8872 | | 0.0105 | 304.76 | 6400 | 0.7243 | 0.8963 | 0.8963 | | 0.0111 | 314.29 | 6600 | 0.8182 | 0.8899 | 0.8902 | | 0.0089 | 323.81 | 6800 | 0.8178 | 0.8901 | 0.8902 | | 0.0107 | 333.33 | 7000 | 0.7995 | 0.8902 | 0.8902 | | 0.0082 | 342.86 | 7200 | 0.8293 | 0.8871 | 0.8872 | | 0.01 | 352.38 | 7400 | 0.7445 | 0.8933 | 0.8933 | | 0.0088 | 361.9 | 7600 | 0.7924 | 0.8901 | 0.8902 | | 0.0075 | 371.43 | 7800 | 0.8247 | 0.8870 | 0.8872 | | 0.0076 | 380.95 | 8000 | 0.8026 | 0.8841 | 0.8841 | | 0.0074 | 390.48 | 8200 | 0.8535 | 0.8809 | 0.8811 | | 0.0071 | 400.0 | 8400 | 0.8746 | 0.8839 | 0.8841 | | 0.0069 | 409.52 | 8600 | 0.8075 | 0.8902 | 0.8902 | | 0.0054 | 419.05 | 8800 | 0.8182 | 0.8871 | 
0.8872 | | 0.0067 | 428.57 | 9000 | 0.8328 | 0.8809 | 0.8811 | | 0.0068 | 438.1 | 9200 | 0.8452 | 0.8809 | 0.8811 | | 0.0059 | 447.62 | 9400 | 0.8438 | 0.8840 | 0.8841 | | 0.0059 | 457.14 | 9600 | 0.8414 | 0.8840 | 0.8841 | | 0.0061 | 466.67 | 9800 | 0.8342 | 0.8809 | 0.8811 | | 0.0054 | 476.19 | 10000 | 0.8414 | 0.8840 | 0.8841 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_2-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:45:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_2-seqsight\_8192\_512\_30M-L8\_f ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.7217 * F1 Score: 0.8810 * Accuracy: 0.8811 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_2-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.6645 - F1 Score: 0.8811 - Accuracy: 0.8811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3616 | 9.52 | 200 | 0.3021 | 0.8536 | 0.8537 | | 0.2006 | 19.05 | 400 | 0.3648 | 0.8678 | 0.8689 | | 0.1313 | 28.57 | 600 | 0.3987 | 0.8840 | 0.8841 | | 0.0878 | 38.1 | 800 | 0.4055 | 0.9054 | 0.9055 | | 0.0608 | 47.62 | 1000 | 0.4380 | 0.8902 | 0.8902 | | 0.0396 | 57.14 | 1200 | 0.5832 | 0.8993 | 0.8994 | | 0.0329 | 66.67 | 1400 | 0.5412 | 0.8841 | 0.8841 | | 0.0297 | 76.19 | 1600 | 0.5713 | 0.8900 | 0.8902 | | 0.0251 | 85.71 | 1800 | 0.6235 | 0.8870 | 0.8872 | | 0.0175 | 95.24 | 2000 | 0.6229 | 0.8932 | 0.8933 | | 0.0146 | 104.76 | 2200 | 0.5887 | 0.9054 | 0.9055 | | 0.0177 | 114.29 | 2400 | 0.5519 | 0.8901 | 0.8902 | | 0.0119 | 123.81 | 2600 | 0.6173 | 0.8872 | 0.8872 | | 0.0113 | 133.33 | 2800 | 0.6440 | 0.8933 | 0.8933 | | 0.0121 | 142.86 | 3000 | 0.5785 | 0.8963 | 0.8963 | | 0.0091 | 152.38 | 3200 | 0.6040 | 0.8962 | 0.8963 | | 0.0081 | 161.9 | 3400 | 0.6695 | 0.8930 | 0.8933 | | 0.0094 | 171.43 | 3600 | 0.5808 | 0.9207 | 0.9207 | | 0.0055 | 180.95 | 3800 | 0.6948 | 0.8993 | 0.8994 | | 0.007 | 190.48 | 4000 | 0.7483 | 0.9115 | 0.9116 | | 0.0072 | 200.0 | 4200 | 0.6142 | 0.9054 | 0.9055 | | 0.005 | 209.52 | 4400 | 0.7102 | 0.8993 | 0.8994 | | 0.007 | 219.05 | 4600 | 0.5958 | 0.8870 | 0.8872 | | 0.0056 | 228.57 | 4800 | 0.6067 | 0.9085 | 0.9085 | | 0.0042 | 238.1 | 5000 | 0.7074 | 0.8901 | 0.8902 | | 0.0038 | 247.62 | 5200 | 0.7191 | 0.8991 | 0.8994 | | 0.0045 | 257.14 | 5400 | 0.5924 | 0.9116 | 0.9116 | | 0.0037 | 266.67 | 5600 | 0.6330 | 0.9055 | 0.9055 | | 0.0031 | 276.19 | 5800 | 0.6398 | 0.9023 | 0.9024 | | 0.0045 | 285.71 | 6000 | 0.6891 | 0.8993 | 0.8994 | | 0.0027 | 295.24 | 6200 | 0.7027 | 0.9177 | 0.9177 | | 0.0033 | 304.76 | 6400 | 0.7020 | 0.9054 | 0.9055 | | 0.003 | 314.29 | 6600 | 0.7121 | 0.8993 | 0.8994 | | 0.0026 | 323.81 | 6800 | 0.7751 | 0.8963 | 0.8963 | | 0.0025 | 333.33 | 7000 | 0.7348 | 0.9085 | 0.9085 | | 0.0018 | 342.86 | 7200 | 0.7936 | 0.9055 | 0.9055 | | 0.0028 | 352.38 | 7400 | 0.7236 | 0.9055 | 0.9055 | | 0.0026 | 361.9 | 7600 | 0.6501 | 0.9054 | 0.9055 | | 0.0022 | 371.43 | 7800 | 0.6888 | 0.9085 | 0.9085 | | 0.0017 | 380.95 | 8000 | 0.6895 | 0.9055 | 0.9055 | | 0.0018 | 390.48 | 8200 | 0.7289 | 0.9116 | 0.9116 | | 0.0014 | 400.0 | 8400 | 0.7563 | 0.9085 | 0.9085 | | 0.0016 | 409.52 | 8600 | 0.7084 | 0.9116 | 0.9116 | | 0.0013 | 419.05 | 8800 | 0.7590 | 
0.9085 | 0.9085 | | 0.0009 | 428.57 | 9000 | 0.7604 | 0.9116 | 0.9116 | | 0.001 | 438.1 | 9200 | 0.7578 | 0.9055 | 0.9055 | | 0.0015 | 447.62 | 9400 | 0.7548 | 0.9116 | 0.9116 | | 0.0007 | 457.14 | 9600 | 0.7872 | 0.8993 | 0.8994 | | 0.0006 | 466.67 | 9800 | 0.7643 | 0.9116 | 0.9116 | | 0.001 | 476.19 | 10000 | 0.7701 | 0.9116 | 0.9116 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_2-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:46:41+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_mouse\_2-seqsight\_8192\_512\_30M-L32\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.6645 * F1 Score: 0.8811 * Accuracy: 0.8811 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.3804 - F1 Score: 0.8475 - Accuracy: 0.8468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9702 | 0.7 | 200 | 0.9299 | 0.4471 | 0.5631 | | 0.9012 | 1.4 | 400 | 0.8756 | 0.5528 | 0.5800 | | 0.7187 | 2.1 | 600 | 0.5714 | 0.7511 | 0.7501 | | 0.5478 | 2.8 | 800 | 0.5069 | 0.7840 | 0.7830 | | 0.512 | 3.5 | 1000 | 0.4850 | 0.7925 | 0.7920 | | 0.498 | 4.2 | 1200 | 0.4768 | 0.8011 | 0.7996 | | 0.48 | 4.9 | 1400 | 0.4678 | 0.8050 | 0.8047 | | 0.4727 | 5.59 | 1600 | 0.4686 | 0.8129 | 0.8128 | | 0.4602 | 6.29 | 1800 | 0.4730 | 0.8044 | 0.8034 | | 0.4549 | 6.99 | 2000 | 0.4491 | 0.8166 | 0.8157 | | 0.4493 | 7.69 | 2200 | 0.4262 | 0.8261 | 0.8260 | | 0.4376 | 8.39 | 2400 | 0.4393 | 0.8219 | 0.8214 | | 0.4409 | 9.09 | 2600 | 0.4433 | 0.8189 | 0.8178 | | 0.4333 | 9.79 | 2800 | 0.4359 | 0.8216 | 0.8209 | | 0.4323 | 10.49 | 3000 | 0.4403 | 0.8205 | 0.8198 | | 0.423 | 11.19 | 3200 | 0.4466 | 0.8205 | 0.8196 | | 0.4264 | 11.89 | 3400 | 0.4211 | 0.8289 | 0.8281 | | 0.4118 | 12.59 | 3600 | 0.4301 | 0.8290 | 0.8284 | | 0.4198 | 13.29 | 3800 | 0.4175 | 0.8324 | 0.8317 | | 0.4129 | 13.99 | 4000 | 0.4398 | 0.8220 | 0.8211 | | 0.4038 | 14.69 | 4200 | 0.4330 | 0.8253 | 0.8244 | | 0.4148 | 15.38 | 4400 | 0.4241 | 0.8303 | 0.8295 | | 0.408 | 16.08 | 4600 | 0.4587 | 0.8120 | 0.8113 | | 0.4066 | 16.78 | 4800 | 0.4184 | 0.8332 | 0.8323 | | 0.4002 | 17.48 | 5000 | 0.4429 | 0.8217 | 0.8207 | | 0.4029 | 18.18 | 5200 | 0.4022 | 0.8409 | 0.8402 | | 0.397 | 18.88 | 5400 | 0.4166 | 0.8345 | 0.8336 | | 0.3951 | 19.58 | 5600 | 0.4143 | 0.8376 | 0.8369 | | 0.4009 | 20.28 | 5800 | 0.4117 | 0.8409 | 0.8402 | | 0.3921 | 20.98 | 6000 | 0.4044 | 0.8399 | 0.8393 | | 0.3956 | 21.68 | 6200 | 0.4258 | 0.8297 | 0.8290 | | 0.3906 | 22.38 | 6400 | 0.4151 | 0.8355 | 0.8347 | | 0.3888 | 23.08 | 6600 | 0.4197 | 0.8327 | 0.8319 | | 0.3895 | 23.78 | 6800 | 0.4057 | 0.8399 | 0.8391 | | 0.3905 | 24.48 | 7000 | 0.4212 | 0.8296 | 0.8288 | | 0.3894 | 25.17 | 7200 | 0.4062 | 0.8378 | 0.8369 | | 0.3879 | 25.87 | 7400 | 0.4158 | 0.8340 | 0.8332 | | 0.3817 | 26.57 | 7600 | 0.4236 | 0.8303 | 0.8295 | | 0.3803 | 27.27 | 7800 | 0.4165 | 0.8346 | 0.8338 | | 0.382 | 27.97 | 8000 | 0.4152 | 0.8351 | 0.8343 | | 0.3845 | 28.67 | 8200 | 0.4170 | 0.8359 | 0.8352 | | 0.3806 | 29.37 | 8400 | 0.4144 | 0.8356 | 0.8347 | | 0.3754 | 30.07 | 8600 | 0.4066 | 0.8403 | 0.8395 | | 0.3795 | 30.77 | 8800 | 0.4171 | 0.8325 | 0.8317 
| | 0.3741 | 31.47 | 9000 | 0.4140 | 0.8368 | 0.8360 | | 0.3847 | 32.17 | 9200 | 0.4102 | 0.8367 | 0.8358 | | 0.3739 | 32.87 | 9400 | 0.4150 | 0.8368 | 0.8360 | | 0.3794 | 33.57 | 9600 | 0.4174 | 0.8342 | 0.8334 | | 0.3826 | 34.27 | 9800 | 0.4145 | 0.8355 | 0.8347 | | 0.374 | 34.97 | 10000 | 0.4148 | 0.8353 | 0.8345 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:46:51+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_splice\_reconstructed-seqsight\_8192\_512\_30M-L1\_f ========================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset. It achieves the following results on the evaluation set: * Loss: 0.3804 * F1 Score: 0.8475 * Accuracy: 0.8468 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
null
# Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF This model was converted to GGUF format from [`OpenBuddy/openbuddy-llama3-8b-v21.1-8k`](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF --model openbuddy-llama3-8b-v21.1-8k.Q4_K_S.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF --model openbuddy-llama3-8b-v21.1-8k.Q4_K_S.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m openbuddy-llama3-8b-v21.1-8k.Q4_K_S.gguf -n 128 ```
{"language": ["zh", "en", "fr", "de", "ja", "ko", "it", "fi"], "license": "other", "tags": ["llama-3", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license/"}
Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF
null
[ "gguf", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "fi", "license:other", "region:us" ]
null
2024-04-27T05:48:07+00:00
[]
[ "zh", "en", "fr", "de", "ja", "ko", "it", "fi" ]
TAGS #gguf #llama-3 #llama-cpp #gguf-my-repo #text-generation #zh #en #fr #de #ja #ko #it #fi #license-other #region-us
# Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF This model was converted to GGUF format from 'OpenBuddy/openbuddy-llama3-8b-v21.1-8k' using URL via URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
[ "# Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'OpenBuddy/openbuddy-llama3-8b-v21.1-8k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-3 #llama-cpp #gguf-my-repo #text-generation #zh #en #fr #de #ja #ko #it #fi #license-other #region-us \n", "# Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'OpenBuddy/openbuddy-llama3-8b-v21.1-8k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.3130 - F1 Score: 0.8775 - Accuracy: 0.8770 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9542 | 0.7 | 200 | 0.8949 | 0.5097 | 0.5684 | | 0.7554 | 1.4 | 400 | 0.5420 | 0.7595 | 0.7589 | | 0.51 | 2.1 | 600 | 0.4631 | 0.8078 | 0.8075 | | 0.4611 | 2.8 | 800 | 0.4640 | 0.8088 | 0.8080 | | 0.4456 | 3.5 | 1000 | 0.4273 | 0.8321 | 0.8317 | | 0.4294 | 4.2 | 1200 | 0.4145 | 0.8327 | 0.8317 | | 0.4127 | 4.9 | 1400 | 0.4068 | 0.8360 | 0.8354 | | 0.4057 | 5.59 | 1600 | 0.4357 | 0.8271 | 0.8273 | | 0.3912 | 6.29 | 1800 | 0.4216 | 0.8320 | 0.8310 | | 0.381 | 6.99 | 2000 | 0.3908 | 0.8486 | 0.8477 | | 0.3749 | 7.69 | 2200 | 0.3888 | 0.8480 | 0.8472 | | 0.3634 | 8.39 | 2400 | 0.3829 | 0.8538 | 0.8534 | | 0.3617 | 9.09 | 2600 | 0.4030 | 0.8426 | 0.8413 | | 0.3542 | 9.79 | 2800 | 0.3773 | 0.8507 | 0.8498 | | 0.353 | 10.49 | 3000 | 0.3784 | 0.8501 | 0.8494 | | 0.3427 | 11.19 | 3200 | 0.4068 | 0.8419 | 0.8409 | | 0.3425 | 11.89 | 3400 | 0.3851 | 0.8471 | 0.8461 | | 0.33 | 12.59 | 3600 | 0.3885 | 0.8495 | 0.8488 | | 0.3362 | 13.29 | 3800 | 0.3658 | 0.8630 | 0.8621 | | 0.3251 | 13.99 | 4000 | 0.3974 | 0.8509 | 0.8496 | | 0.317 | 14.69 | 4200 | 0.4007 | 0.8402 | 0.8393 | | 0.3252 | 15.38 | 4400 | 0.3611 | 0.8643 | 0.8637 | | 0.3178 | 16.08 | 4600 | 0.3869 | 0.8531 | 0.8520 | | 0.3147 | 16.78 | 4800 | 0.3765 | 0.8585 | 0.8577 | | 0.3071 | 17.48 | 5000 | 0.3780 | 0.8581 | 0.8571 | | 0.3097 | 18.18 | 5200 | 0.3498 | 0.8665 | 0.8658 | | 0.3058 | 18.88 | 5400 | 0.3673 | 0.8622 | 0.8615 | | 0.3024 | 19.58 | 5600 | 0.3531 | 0.8693 | 0.8687 | | 0.3106 | 20.28 | 5800 | 0.3465 | 0.8713 | 0.8707 | | 0.2983 | 20.98 | 6000 | 0.3315 | 0.8744 | 0.8740 | | 0.2992 | 21.68 | 6200 | 0.3573 | 0.8650 | 0.8643 | | 0.2969 | 22.38 | 6400 | 0.3603 | 0.8659 | 0.8652 | | 0.2881 | 23.08 | 6600 | 0.3621 | 0.8651 | 0.8643 | | 0.2931 | 23.78 | 6800 | 0.3485 | 0.8670 | 0.8663 | | 0.2916 | 24.48 | 7000 | 0.3610 | 0.8631 | 0.8623 | | 0.2926 | 25.17 | 7200 | 0.3503 | 0.8664 | 0.8656 | | 0.2901 | 25.87 | 7400 | 0.3512 | 0.8666 | 0.8658 | | 0.2871 | 26.57 | 7600 | 0.3668 | 0.8577 | 0.8569 | | 0.2831 | 27.27 | 7800 | 0.3581 | 0.8663 | 0.8656 | | 0.2859 | 27.97 | 8000 | 0.3566 | 0.8670 | 0.8663 | | 0.2889 | 28.67 | 8200 | 0.3415 | 0.8713 | 0.8707 | | 0.2776 | 29.37 | 8400 | 0.3523 | 0.8673 | 0.8665 | | 0.2781 | 30.07 | 8600 | 0.3478 | 0.8698 | 0.8691 | | 0.2757 | 30.77 | 8800 | 0.3556 | 0.8669 | 0.8661 
| | 0.2796 | 31.47 | 9000 | 0.3535 | 0.8675 | 0.8667 | | 0.2835 | 32.17 | 9200 | 0.3457 | 0.8722 | 0.8715 | | 0.2789 | 32.87 | 9400 | 0.3514 | 0.8693 | 0.8687 | | 0.2761 | 33.57 | 9600 | 0.3604 | 0.8644 | 0.8637 | | 0.2775 | 34.27 | 9800 | 0.3541 | 0.8670 | 0.8663 | | 0.2737 | 34.97 | 10000 | 0.3539 | 0.8681 | 0.8674 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
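The card stops at the hyperparameter list, so below is only a minimal sketch of how such a PEFT run could be wired up with 🤗 Transformers — the LoRA configuration, label count, dataset column, and split names are assumptions rather than the author's script. The sibling seqsight cards later in this file follow the same recipe with different adapter sizes and GUE tasks, so the sketch is given once here.

```python
# Hypothetical reconstruction of the fine-tuning run described above.
# Only the hyperparameters and repo names come from the card; everything
# marked "assumed" is an illustrative guess.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "mahdibaghbanzadeh/seqsight_8192_512_30M"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)  # class count assumed
model = get_peft_model(model, LoraConfig(task_type="SEQ_CLS"))  # adapter settings assumed

ds = load_dataset("mahdibaghbanzadeh/GUE_splice_reconstructed")
ds = ds.map(lambda ex: tokenizer(ex["sequence"], truncation=True), batched=True)  # column name assumed

args = TrainingArguments(
    output_dir="GUE_splice_reconstructed-seqsight_8192_512_30M-L8_f",
    learning_rate=5e-4,               # card: 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                 # card: training_steps 10000
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the optimizer defaults.
)
Trainer(
    model=model,
    args=args,
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],    # split name assumed
    tokenizer=tokenizer,              # enables dynamic padding via the default collator
).train()
```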
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:52:18+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_splice\_reconstructed-seqsight\_8192\_512\_30M-L8\_f ========================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset. It achieves the following results on the evaluation set: * Loss: 0.3130 * F1 Score: 0.8775 * Accuracy: 0.8770 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Pretraining_MFM_v3 This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
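Only the hyperparameters are recorded here as well; a minimal masked-LM sketch consistent with them follows — the corpus (wikitext-2 below) and the 15% masking rate are stand-ins, since the card's dataset is unknown.

```python
# Sketch of the masked-LM fine-tuning implied by this card; hyperparameters
# come from the card, while the corpus and masking rate are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-base")

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")  # stand-in corpus
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="Pretraining_MFM_v3",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
Trainer(
    model=model,
    args=args,
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),  # rate assumed
).train()
```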
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/deberta-base", "model-index": [{"name": "Pretraining_MFM_v3", "results": []}]}
JJ-Tae/Pretraining_MFM_v3
null
[ "transformers", "tensorboard", "safetensors", "deberta", "fill-mask", "generated_from_trainer", "base_model:microsoft/deberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T05:53:37+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #deberta #fill-mask #generated_from_trainer #base_model-microsoft/deberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Pretraining_MFM_v3 This model is a fine-tuned version of microsoft/deberta-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# Pretraining_MFM_v3\n\nThis model is a fine-tuned version of microsoft/deberta-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 50", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #deberta #fill-mask #generated_from_trainer #base_model-microsoft/deberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Pretraining_MFM_v3\n\nThis model is a fine-tuned version of microsoft/deberta-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 50", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
null
# CAI-Synthetic Model ## Overview The CAI-Synthetic Model is a large language model designed to understand and respond to complex questions. It has been fine-tuned on a synthetic dataset from Mostly AI, allowing it to respond reliably across a variety of contexts and to perform well in diverse scenarios. ## Base Model and Fine-Tuning - Base Model: Google/Gemma-7b - Fine-Tuning Adapter: LoRA Adapter - Synthetic Dataset: Mostly AI Synthetic Dataset ## Licensing and Usage The CAI-Synthetic Model is licensed under the terms of its base model, Gemma-7b, and the synthetic dataset's licensing agreements. Ensure compliance with any licensing restrictions when using or distributing this model. Attribution to the source of the fine-tuning adapter and the synthetic dataset is required. ## Prompt Configuration When interacting with this model, use the following prompt structure, where the instruction describes a task that requires a response (a small rendering helper is sketched after this card): ``` ### Instruction: {instruction} ### Response: {response} ``` ## Usage Scenarios This model is suitable for various applications, including: - Conversational AI: building chatbots and virtual assistants that can respond in different contexts. - Customer Support: providing automated customer service responses. - Knowledge-based Systems: enhancing systems with contextualized responses based on synthetic data. ## Contact Information For more information about the CAI-Synthetic Model, licensing, or other inquiries, contact [Inner I Network](https://innerinetcompany.com/about/).
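For illustration only, the prompt template above can be rendered with a small helper like the one below — the function name and exact whitespace are assumptions, not part of the released model:

```python
def build_prompt(instruction: str) -> str:
    """Fill the card's prompt template; the response slot is left for the model."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

# Example usage:
print(build_prompt("Summarize the licensing requirements for this model."))
```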
{"license": "gemma", "datasets": ["InnerI/CAI-synthetic-10k"]}
InnerI/CAI-synthetic
null
[ "safetensors", "dataset:InnerI/CAI-synthetic-10k", "license:gemma", "region:us" ]
null
2024-04-27T05:54:01+00:00
[]
[]
TAGS #safetensors #dataset-InnerI/CAI-synthetic-10k #license-gemma #region-us
# CAI-Synthetic Model ## Overview The CAI-Synthetic Model is a large language model designed to understand and respond to complex questions. This model has been fine-tuned on a synthetic dataset from Mostly AI, allowing it to engage in a variety of contexts with reliable responses. It is designed to perform well in diverse scenarios. ## Base Model and Fine-Tuning - Base Model: Google/Gemma-7b - Fine-Tuning Adapter: LoRA Adapter - Synthetic Dataset: Mostly AI Synthetic Dataset ## Licensing and Usage The CAI-Synthetic Model is licensed under the terms of its base model, Gemma-7b, and the synthetic dataset's licensing agreements. Ensure compliance with any licensing restrictions when using or distributing this model. Attribution to the source of the fine-tuning adapter and the synthetic dataset is required. ## Prompt Configuration When using this model, you can employ the following prompt structure for interactions: '''' ### Instruction Describe a task that requires a response. ### Instruction: {instruction} ### Response: {response} '''' ## Usage Scenarios This model is suitable for various applications, including: ## Conversational AI: Building chatbots and virtual assistants that can respond in different contexts. ## Customer Support: Providing automated customer service responses. ## Knowledge-based Systems: Enhancing systems with contextualized responses based on synthetic data. ## Contact Information For more information about the CAI-Synthetic Model, licensing, or other inquiries, contact Inner I Network.
[ "# CAI-Synthetic Model", "## Overview\nThe CAI-Synthetic Model is a large language model designed to understand and respond to complex questions. This model has been fine-tuned on a synthetic dataset from Mostly AI, allowing it to engage in a variety of contexts with reliable responses. It is designed to perform well in diverse scenarios.", "## Base Model and Fine-Tuning\n- Base Model: Google/Gemma-7b\n\n- Fine-Tuning Adapter: LoRA Adapter\n\n- Synthetic Dataset: Mostly AI Synthetic Dataset", "## Licensing and Usage\nThe CAI-Synthetic Model is licensed under the terms of its base model, Gemma-7b, and the synthetic dataset's licensing agreements. Ensure compliance with any licensing restrictions when using or distributing this model. Attribution to the source of the fine-tuning adapter and the synthetic dataset is required.", "## Prompt Configuration\nWhen using this model, you can employ the following prompt structure for interactions:\n \n ''''\n \n ### Instruction Describe a task that requires a response.\n \n ### Instruction: {instruction}\n \n ### Response: {response}\n ''''", "## Usage Scenarios\nThis model is suitable for various applications, including:", "## Conversational AI: \nBuilding chatbots and virtual assistants that can respond in different contexts.", "## Customer Support: \nProviding automated customer service responses.", "## Knowledge-based Systems: \nEnhancing systems with contextualized responses based on synthetic data.", "## Contact Information\nFor more information about the CAI-Synthetic Model, licensing, or other inquiries, contact Inner I Network." ]
[ "TAGS\n#safetensors #dataset-InnerI/CAI-synthetic-10k #license-gemma #region-us \n", "# CAI-Synthetic Model", "## Overview\nThe CAI-Synthetic Model is a large language model designed to understand and respond to complex questions. This model has been fine-tuned on a synthetic dataset from Mostly AI, allowing it to engage in a variety of contexts with reliable responses. It is designed to perform well in diverse scenarios.", "## Base Model and Fine-Tuning\n- Base Model: Google/Gemma-7b\n\n- Fine-Tuning Adapter: LoRA Adapter\n\n- Synthetic Dataset: Mostly AI Synthetic Dataset", "## Licensing and Usage\nThe CAI-Synthetic Model is licensed under the terms of its base model, Gemma-7b, and the synthetic dataset's licensing agreements. Ensure compliance with any licensing restrictions when using or distributing this model. Attribution to the source of the fine-tuning adapter and the synthetic dataset is required.", "## Prompt Configuration\nWhen using this model, you can employ the following prompt structure for interactions:\n \n ''''\n \n ### Instruction Describe a task that requires a response.\n \n ### Instruction: {instruction}\n \n ### Response: {response}\n ''''", "## Usage Scenarios\nThis model is suitable for various applications, including:", "## Conversational AI: \nBuilding chatbots and virtual assistants that can respond in different contexts.", "## Customer Support: \nProviding automated customer service responses.", "## Knowledge-based Systems: \nEnhancing systems with contextualized responses based on synthetic data.", "## Contact Information\nFor more information about the CAI-Synthetic Model, licensing, or other inquiries, contact Inner I Network." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_splice_reconstructed-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set: - Loss: 0.2938 - F1 Score: 0.8963 - Accuracy: 0.8959 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.9466 | 0.7 | 200 | 0.8643 | 0.5148 | 0.5831 | | 0.5956 | 1.4 | 400 | 0.4555 | 0.8066 | 0.8056 | | 0.4577 | 2.1 | 600 | 0.4274 | 0.8259 | 0.8251 | | 0.4153 | 2.8 | 800 | 0.4152 | 0.8299 | 0.8290 | | 0.3897 | 3.5 | 1000 | 0.3685 | 0.8559 | 0.8555 | | 0.3746 | 4.2 | 1200 | 0.3909 | 0.8437 | 0.8424 | | 0.358 | 4.9 | 1400 | 0.3652 | 0.8594 | 0.8584 | | 0.3457 | 5.59 | 1600 | 0.3913 | 0.8513 | 0.8509 | | 0.3354 | 6.29 | 1800 | 0.4242 | 0.8295 | 0.8284 | | 0.3228 | 6.99 | 2000 | 0.3479 | 0.8695 | 0.8687 | | 0.3119 | 7.69 | 2200 | 0.3577 | 0.8613 | 0.8604 | | 0.3025 | 8.39 | 2400 | 0.3457 | 0.8699 | 0.8694 | | 0.3012 | 9.09 | 2600 | 0.3635 | 0.8613 | 0.8599 | | 0.288 | 9.79 | 2800 | 0.3310 | 0.8762 | 0.8755 | | 0.2873 | 10.49 | 3000 | 0.3297 | 0.8811 | 0.8805 | | 0.2744 | 11.19 | 3200 | 0.3476 | 0.8710 | 0.8702 | | 0.2757 | 11.89 | 3400 | 0.3811 | 0.8562 | 0.8551 | | 0.2588 | 12.59 | 3600 | 0.3474 | 0.8696 | 0.8689 | | 0.2623 | 13.29 | 3800 | 0.3304 | 0.8825 | 0.8816 | | 0.2531 | 13.99 | 4000 | 0.3333 | 0.8779 | 0.8770 | | 0.2449 | 14.69 | 4200 | 0.3418 | 0.8759 | 0.8751 | | 0.2511 | 15.38 | 4400 | 0.3267 | 0.8831 | 0.8825 | | 0.2379 | 16.08 | 4600 | 0.3480 | 0.8743 | 0.8735 | | 0.2355 | 16.78 | 4800 | 0.3266 | 0.8795 | 0.8788 | | 0.2293 | 17.48 | 5000 | 0.3219 | 0.8859 | 0.8851 | | 0.2314 | 18.18 | 5200 | 0.3096 | 0.8926 | 0.8922 | | 0.225 | 18.88 | 5400 | 0.3123 | 0.8881 | 0.8875 | | 0.2203 | 19.58 | 5600 | 0.3278 | 0.8833 | 0.8827 | | 0.2245 | 20.28 | 5800 | 0.2965 | 0.8963 | 0.8959 | | 0.2128 | 20.98 | 6000 | 0.2976 | 0.8982 | 0.8979 | | 0.2138 | 21.68 | 6200 | 0.2932 | 0.8977 | 0.8974 | | 0.2074 | 22.38 | 6400 | 0.3216 | 0.8902 | 0.8895 | | 0.2046 | 23.08 | 6600 | 0.3221 | 0.8897 | 0.8891 | | 0.2065 | 23.78 | 6800 | 0.3026 | 0.8968 | 0.8963 | | 0.2015 | 24.48 | 7000 | 0.3030 | 0.8983 | 0.8979 | | 0.2007 | 25.17 | 7200 | 0.3208 | 0.8877 | 0.8871 | | 0.1996 | 25.87 | 7400 | 0.3060 | 0.8949 | 0.8943 | | 0.1945 | 26.57 | 7600 | 0.3219 | 0.8891 | 0.8884 | | 0.1929 | 27.27 | 7800 | 0.3086 | 0.8948 | 0.8943 | | 0.1935 | 27.97 | 8000 | 0.3144 | 0.8948 | 0.8943 | | 0.1936 | 28.67 | 8200 | 0.3078 | 0.8966 | 0.8961 | | 0.1836 | 29.37 | 8400 | 0.3153 | 0.8927 | 0.8922 | | 0.1815 | 30.07 | 8600 | 0.3117 | 0.8970 | 0.8965 | | 0.183 | 30.77 | 8800 | 0.3181 | 0.8949 | 
0.8943 | | 0.1865 | 31.47 | 9000 | 0.3161 | 0.8960 | 0.8954 | | 0.1859 | 32.17 | 9200 | 0.3103 | 0.8981 | 0.8976 | | 0.1802 | 32.87 | 9400 | 0.3170 | 0.8957 | 0.8952 | | 0.1806 | 33.57 | 9600 | 0.3252 | 0.8925 | 0.8919 | | 0.1803 | 34.27 | 9800 | 0.3181 | 0.8957 | 0.8952 | | 0.1787 | 34.97 | 10000 | 0.3147 | 0.8968 | 0.8963 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:54:39+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_splice\_reconstructed-seqsight\_8192\_512\_30M-L32\_f ========================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset. It achieves the following results on the evaluation set: * Loss: 0.2938 * F1 Score: 0.8963 * Accuracy: 0.8959 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3607 - F1 Score: 0.8409 - Accuracy: 0.841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5635 | 0.79 | 200 | 0.4994 | 0.7418 | 0.742 | | 0.4892 | 1.58 | 400 | 0.4840 | 0.7565 | 0.757 | | 0.4811 | 2.37 | 600 | 0.4766 | 0.7625 | 0.763 | | 0.4704 | 3.16 | 800 | 0.4816 | 0.7579 | 0.758 | | 0.4635 | 3.95 | 1000 | 0.4639 | 0.7697 | 0.77 | | 0.4626 | 4.74 | 1200 | 0.4702 | 0.7750 | 0.775 | | 0.4589 | 5.53 | 1400 | 0.4735 | 0.7728 | 0.773 | | 0.4547 | 6.32 | 1600 | 0.4753 | 0.7583 | 0.759 | | 0.4563 | 7.11 | 1800 | 0.4752 | 0.7665 | 0.767 | | 0.457 | 7.91 | 2000 | 0.4700 | 0.7717 | 0.772 | | 0.4517 | 8.7 | 2200 | 0.4640 | 0.7719 | 0.772 | | 0.4519 | 9.49 | 2400 | 0.4543 | 0.7920 | 0.792 | | 0.4498 | 10.28 | 2600 | 0.4856 | 0.7534 | 0.755 | | 0.4463 | 11.07 | 2800 | 0.4689 | 0.7715 | 0.772 | | 0.4448 | 11.86 | 3000 | 0.4686 | 0.7726 | 0.773 | | 0.447 | 12.65 | 3200 | 0.4704 | 0.7653 | 0.766 | | 0.4433 | 13.44 | 3400 | 0.4580 | 0.7831 | 0.783 | | 0.4428 | 14.23 | 3600 | 0.4570 | 0.7821 | 0.782 | | 0.4448 | 15.02 | 3800 | 0.4687 | 0.7777 | 0.778 | | 0.445 | 15.81 | 4000 | 0.4620 | 0.7736 | 0.774 | | 0.4408 | 16.6 | 4200 | 0.4574 | 0.7890 | 0.789 | | 0.4412 | 17.39 | 4400 | 0.4755 | 0.7693 | 0.77 | | 0.4398 | 18.18 | 4600 | 0.4620 | 0.7810 | 0.781 | | 0.4374 | 18.97 | 4800 | 0.4671 | 0.7715 | 0.772 | | 0.4416 | 19.76 | 5000 | 0.4561 | 0.7900 | 0.79 | | 0.4368 | 20.55 | 5200 | 0.4514 | 0.7950 | 0.795 | | 0.4365 | 21.34 | 5400 | 0.4618 | 0.7778 | 0.778 | | 0.4352 | 22.13 | 5600 | 0.4628 | 0.7849 | 0.785 | | 0.4399 | 22.92 | 5800 | 0.4552 | 0.7911 | 0.791 | | 0.4322 | 23.72 | 6000 | 0.4633 | 0.7849 | 0.785 | | 0.4361 | 24.51 | 6200 | 0.4529 | 0.7901 | 0.79 | | 0.4389 | 25.3 | 6400 | 0.4563 | 0.7900 | 0.79 | | 0.4339 | 26.09 | 6600 | 0.4562 | 0.7900 | 0.79 | | 0.4333 | 26.88 | 6800 | 0.4605 | 0.7899 | 0.79 | | 0.4344 | 27.67 | 7000 | 0.4522 | 0.7920 | 0.792 | | 0.4323 | 28.46 | 7200 | 0.4511 | 0.7900 | 0.79 | | 0.4334 | 29.25 | 7400 | 0.4550 | 0.7921 | 0.792 | | 0.4367 | 30.04 | 7600 | 0.4547 | 0.7931 | 0.793 | | 0.4336 | 30.83 | 7800 | 0.4574 | 0.7890 | 0.789 | | 0.4332 | 31.62 | 8000 | 0.4493 | 0.7910 | 0.791 | | 0.4336 | 32.41 | 8200 | 0.4571 | 0.7880 | 0.788 | | 0.4285 | 33.2 | 8400 | 0.4565 | 0.7860 | 0.786 | | 0.4357 | 33.99 | 8600 | 0.4540 | 0.7951 | 0.795 | | 0.4337 | 34.78 | 8800 | 0.4518 | 0.7901 | 0.79 | | 0.4274 | 35.57 | 9000 | 0.4544 | 0.7921 | 0.792 | | 0.43 | 36.36 | 9200 | 0.4592 | 
0.7910 | 0.791 | | 0.4333 | 37.15 | 9400 | 0.4599 | 0.7879 | 0.788 | | 0.4312 | 37.94 | 9600 | 0.4565 | 0.7940 | 0.794 | | 0.4336 | 38.74 | 9800 | 0.4573 | 0.7930 | 0.793 | | 0.4316 | 39.53 | 10000 | 0.4571 | 0.7940 | 0.794 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_0-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:55:03+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_0-seqsight\_8192\_512\_30M-L1\_f ========================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.3607 * F1 Score: 0.8409 * Accuracy: 0.841 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3528 - F1 Score: 0.8428 - Accuracy: 0.843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5386 | 0.79 | 200 | 0.4856 | 0.7561 | 0.757 | | 0.4747 | 1.58 | 400 | 0.4722 | 0.7628 | 0.763 | | 0.4664 | 2.37 | 600 | 0.4645 | 0.7786 | 0.779 | | 0.4576 | 3.16 | 800 | 0.4698 | 0.7679 | 0.768 | | 0.4508 | 3.95 | 1000 | 0.4534 | 0.7846 | 0.785 | | 0.4487 | 4.74 | 1200 | 0.4598 | 0.7800 | 0.78 | | 0.4443 | 5.53 | 1400 | 0.4762 | 0.7722 | 0.773 | | 0.4404 | 6.32 | 1600 | 0.4650 | 0.7797 | 0.78 | | 0.4415 | 7.11 | 1800 | 0.4669 | 0.7714 | 0.772 | | 0.4397 | 7.91 | 2000 | 0.4687 | 0.7754 | 0.776 | | 0.4345 | 8.7 | 2200 | 0.4578 | 0.792 | 0.792 | | 0.4336 | 9.49 | 2400 | 0.4502 | 0.7850 | 0.785 | | 0.432 | 10.28 | 2600 | 0.4730 | 0.7679 | 0.769 | | 0.4287 | 11.07 | 2800 | 0.4664 | 0.7701 | 0.771 | | 0.4263 | 11.86 | 3000 | 0.4631 | 0.7797 | 0.78 | | 0.4252 | 12.65 | 3200 | 0.4613 | 0.7767 | 0.777 | | 0.4226 | 13.44 | 3400 | 0.4561 | 0.7930 | 0.793 | | 0.4222 | 14.23 | 3600 | 0.4577 | 0.7860 | 0.786 | | 0.422 | 15.02 | 3800 | 0.4680 | 0.7807 | 0.781 | | 0.4211 | 15.81 | 4000 | 0.4573 | 0.7815 | 0.782 | | 0.4155 | 16.6 | 4200 | 0.4588 | 0.7861 | 0.786 | | 0.4175 | 17.39 | 4400 | 0.4747 | 0.7709 | 0.772 | | 0.4147 | 18.18 | 4600 | 0.4597 | 0.7820 | 0.782 | | 0.4111 | 18.97 | 4800 | 0.4718 | 0.7702 | 0.771 | | 0.4146 | 19.76 | 5000 | 0.4620 | 0.7798 | 0.78 | | 0.4133 | 20.55 | 5200 | 0.4548 | 0.7851 | 0.785 | | 0.4074 | 21.34 | 5400 | 0.4699 | 0.7678 | 0.769 | | 0.4074 | 22.13 | 5600 | 0.4736 | 0.7747 | 0.775 | | 0.411 | 22.92 | 5800 | 0.4597 | 0.7799 | 0.78 | | 0.4029 | 23.72 | 6000 | 0.4688 | 0.7748 | 0.775 | | 0.4073 | 24.51 | 6200 | 0.4631 | 0.7869 | 0.787 | | 0.4092 | 25.3 | 6400 | 0.4622 | 0.7830 | 0.783 | | 0.4031 | 26.09 | 6600 | 0.4634 | 0.7859 | 0.786 | | 0.402 | 26.88 | 6800 | 0.4682 | 0.7858 | 0.786 | | 0.402 | 27.67 | 7000 | 0.4595 | 0.7851 | 0.785 | | 0.4007 | 28.46 | 7200 | 0.4630 | 0.7871 | 0.787 | | 0.4028 | 29.25 | 7400 | 0.4655 | 0.7789 | 0.779 | | 0.4023 | 30.04 | 7600 | 0.4693 | 0.7819 | 0.782 | | 0.4009 | 30.83 | 7800 | 0.4683 | 0.7859 | 0.786 | | 0.4018 | 31.62 | 8000 | 0.4613 | 0.7881 | 0.788 | | 0.4021 | 32.41 | 8200 | 0.4691 | 0.7799 | 0.78 | | 0.3937 | 33.2 | 8400 | 0.4662 | 0.7859 | 0.786 | | 0.4001 | 33.99 | 8600 | 0.4675 | 0.7860 | 0.786 | | 0.3996 | 34.78 | 8800 | 0.4635 | 0.7870 | 0.787 | | 0.3931 | 35.57 | 9000 | 0.4651 | 0.7840 | 0.784 | | 0.3965 | 36.36 | 9200 | 0.4731 | 
0.7819 | 0.782 | | 0.3971 | 37.15 | 9400 | 0.4751 | 0.7738 | 0.774 | | 0.3951 | 37.94 | 9600 | 0.4701 | 0.7820 | 0.782 | | 0.4001 | 38.74 | 9800 | 0.4709 | 0.7779 | 0.778 | | 0.3961 | 39.53 | 10000 | 0.4705 | 0.7819 | 0.782 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_0-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:55:03+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_0-seqsight\_8192\_512\_30M-L8\_f ========================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.3528 * F1 Score: 0.8428 * Accuracy: 0.843 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_0-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset. It achieves the following results on the evaluation set: - Loss: 0.3627 - F1 Score: 0.8366 - Accuracy: 0.837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5192 | 0.79 | 200 | 0.4765 | 0.7689 | 0.769 | | 0.467 | 1.58 | 400 | 0.4630 | 0.7830 | 0.783 | | 0.4572 | 2.37 | 600 | 0.4575 | 0.7893 | 0.79 | | 0.4488 | 3.16 | 800 | 0.4623 | 0.7800 | 0.78 | | 0.4411 | 3.95 | 1000 | 0.4550 | 0.7787 | 0.779 | | 0.4383 | 4.74 | 1200 | 0.4547 | 0.7910 | 0.791 | | 0.4314 | 5.53 | 1400 | 0.4777 | 0.7703 | 0.771 | | 0.4266 | 6.32 | 1600 | 0.4651 | 0.7869 | 0.787 | | 0.4256 | 7.11 | 1800 | 0.4684 | 0.7716 | 0.772 | | 0.423 | 7.91 | 2000 | 0.4630 | 0.7737 | 0.774 | | 0.4161 | 8.7 | 2200 | 0.4715 | 0.7729 | 0.773 | | 0.4123 | 9.49 | 2400 | 0.4632 | 0.7810 | 0.781 | | 0.4114 | 10.28 | 2600 | 0.4778 | 0.7755 | 0.776 | | 0.4068 | 11.07 | 2800 | 0.4784 | 0.7678 | 0.768 | | 0.4019 | 11.86 | 3000 | 0.4931 | 0.7768 | 0.777 | | 0.3986 | 12.65 | 3200 | 0.4738 | 0.7800 | 0.78 | | 0.394 | 13.44 | 3400 | 0.4854 | 0.7831 | 0.783 | | 0.3927 | 14.23 | 3600 | 0.4796 | 0.7750 | 0.775 | | 0.392 | 15.02 | 3800 | 0.4955 | 0.7735 | 0.774 | | 0.3875 | 15.81 | 4000 | 0.4666 | 0.7750 | 0.775 | | 0.3823 | 16.6 | 4200 | 0.4937 | 0.7691 | 0.769 | | 0.3833 | 17.39 | 4400 | 0.4885 | 0.7605 | 0.761 | | 0.3799 | 18.18 | 4600 | 0.4851 | 0.7731 | 0.773 | | 0.3747 | 18.97 | 4800 | 0.4933 | 0.7674 | 0.768 | | 0.3769 | 19.76 | 5000 | 0.4682 | 0.7771 | 0.777 | | 0.3734 | 20.55 | 5200 | 0.4840 | 0.7700 | 0.77 | | 0.3646 | 21.34 | 5400 | 0.4968 | 0.7603 | 0.761 | | 0.3601 | 22.13 | 5600 | 0.5059 | 0.7688 | 0.769 | | 0.3671 | 22.92 | 5800 | 0.4913 | 0.7700 | 0.77 | | 0.3548 | 23.72 | 6000 | 0.4869 | 0.7840 | 0.784 | | 0.3578 | 24.51 | 6200 | 0.4793 | 0.7769 | 0.777 | | 0.3618 | 25.3 | 6400 | 0.4879 | 0.7729 | 0.773 | | 0.3515 | 26.09 | 6600 | 0.4902 | 0.7791 | 0.779 | | 0.3503 | 26.88 | 6800 | 0.4937 | 0.7790 | 0.779 | | 0.3485 | 27.67 | 7000 | 0.4882 | 0.7821 | 0.782 | | 0.3447 | 28.46 | 7200 | 0.5060 | 0.7841 | 0.784 | | 0.3469 | 29.25 | 7400 | 0.5030 | 0.7760 | 0.776 | | 0.346 | 30.04 | 7600 | 0.5076 | 0.7739 | 0.774 | | 0.3403 | 30.83 | 7800 | 0.5044 | 0.7770 | 0.777 | | 0.3414 | 31.62 | 8000 | 0.5016 | 0.7890 | 0.789 | | 0.3419 | 32.41 | 8200 | 0.5121 | 0.7749 | 0.775 | | 0.334 | 33.2 | 8400 | 0.5049 | 0.7770 | 0.777 | | 0.3389 | 33.99 | 8600 | 0.5084 | 0.7780 | 0.778 | | 0.3376 | 34.78 | 8800 | 0.4986 | 0.7871 | 0.787 | | 0.3305 | 35.57 | 9000 | 0.5059 | 0.7831 | 0.783 | | 0.3336 | 36.36 | 9200 | 0.5192 | 
0.7709 | 0.771 | | 0.3339 | 37.15 | 9400 | 0.5232 | 0.7748 | 0.775 | | 0.33 | 37.94 | 9600 | 0.5195 | 0.7729 | 0.773 | | 0.3343 | 38.74 | 9800 | 0.5196 | 0.7770 | 0.777 | | 0.3301 | 39.53 | 10000 | 0.5200 | 0.7750 | 0.775 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_0-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:55:35+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_0-seqsight\_8192\_512\_30M-L32\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.3627 * F1 Score: 0.8366 * Accuracy: 0.837 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
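The quick-start section above is left as "More Information Needed"; as a stopgap, here is a generic loading sketch for this Llama-architecture, feature-extraction checkpoint — the repo id is taken from this entry's id field, while the dtype and the intended usage are assumptions:

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "ai-human-lab/EEVE-Korean-10.8B-enko-translate-v0.1"  # from this entry's id field
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo, torch_dtype=torch.float16)  # 10.8B params: fp16 assumed

inputs = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # token-level features, per the pipeline tag
```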
{"library_name": "transformers", "tags": []}
ai-human-lab/EEVE-Korean-10.8B-enko-translate-v0.1
null
[ "transformers", "safetensors", "llama", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T05:56:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #feature-extraction #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #feature-extraction #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_1-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3506 - F1 Score: 0.8549 - Accuracy: 0.855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.581 | 0.83 | 200 | 0.5489 | 0.7280 | 0.728 | | 0.512 | 1.67 | 400 | 0.5263 | 0.7469 | 0.747 | | 0.4971 | 2.5 | 600 | 0.5195 | 0.7469 | 0.747 | | 0.4874 | 3.33 | 800 | 0.5201 | 0.7393 | 0.74 | | 0.4892 | 4.17 | 1000 | 0.5116 | 0.7458 | 0.746 | | 0.4787 | 5.0 | 1200 | 0.5165 | 0.7476 | 0.748 | | 0.4779 | 5.83 | 1400 | 0.5126 | 0.7477 | 0.748 | | 0.4774 | 6.67 | 1600 | 0.5136 | 0.7475 | 0.748 | | 0.4745 | 7.5 | 1800 | 0.5076 | 0.7480 | 0.748 | | 0.4713 | 8.33 | 2000 | 0.5143 | 0.7500 | 0.751 | | 0.4717 | 9.17 | 2200 | 0.5100 | 0.7386 | 0.739 | | 0.4705 | 10.0 | 2400 | 0.5214 | 0.7446 | 0.746 | | 0.4697 | 10.83 | 2600 | 0.5145 | 0.7435 | 0.745 | | 0.469 | 11.67 | 2800 | 0.5212 | 0.7442 | 0.746 | | 0.4586 | 12.5 | 3000 | 0.5150 | 0.7424 | 0.744 | | 0.47 | 13.33 | 3200 | 0.5163 | 0.7432 | 0.745 | | 0.4622 | 14.17 | 3400 | 0.5057 | 0.7339 | 0.734 | | 0.4623 | 15.0 | 3600 | 0.5242 | 0.7416 | 0.744 | | 0.461 | 15.83 | 3800 | 0.5069 | 0.7333 | 0.734 | | 0.4661 | 16.67 | 4000 | 0.5195 | 0.7411 | 0.743 | | 0.4596 | 17.5 | 4200 | 0.5153 | 0.7424 | 0.744 | | 0.4562 | 18.33 | 4400 | 0.5202 | 0.7429 | 0.744 | | 0.4605 | 19.17 | 4600 | 0.5175 | 0.7424 | 0.744 | | 0.4605 | 20.0 | 4800 | 0.5091 | 0.7470 | 0.748 | | 0.4601 | 20.83 | 5000 | 0.5126 | 0.7422 | 0.743 | | 0.4548 | 21.67 | 5200 | 0.5120 | 0.7410 | 0.742 | | 0.4566 | 22.5 | 5400 | 0.5085 | 0.7386 | 0.739 | | 0.4576 | 23.33 | 5600 | 0.5144 | 0.7407 | 0.742 | | 0.4551 | 24.17 | 5800 | 0.5216 | 0.7393 | 0.741 | | 0.4569 | 25.0 | 6000 | 0.5070 | 0.7338 | 0.734 | | 0.4543 | 25.83 | 6200 | 0.5109 | 0.7381 | 0.739 | | 0.4517 | 26.67 | 6400 | 0.5067 | 0.7379 | 0.738 | | 0.4559 | 27.5 | 6600 | 0.5136 | 0.7412 | 0.742 | | 0.4542 | 28.33 | 6800 | 0.5107 | 0.7412 | 0.742 | | 0.454 | 29.17 | 7000 | 0.5107 | 0.7414 | 0.742 | | 0.4547 | 30.0 | 7200 | 0.5112 | 0.7429 | 0.744 | | 0.4558 | 30.83 | 7400 | 0.5196 | 0.7431 | 0.745 | | 0.4514 | 31.67 | 7600 | 0.5059 | 0.7376 | 0.738 | | 0.4546 | 32.5 | 7800 | 0.5075 | 0.7424 | 0.743 | | 0.4499 | 33.33 | 8000 | 0.5113 | 0.7391 | 0.74 | | 0.4561 | 34.17 | 8200 | 0.5075 | 0.7385 | 0.739 | | 0.4503 | 35.0 | 8400 | 0.5075 | 0.7396 | 0.74 | | 0.4551 | 35.83 | 8600 | 0.5081 | 0.7411 | 0.742 | | 0.4535 | 36.67 | 8800 | 0.5095 | 0.7403 | 0.741 | | 0.4489 | 37.5 | 9000 | 0.5168 | 0.7431 | 0.745 | | 0.4517 | 38.33 | 9200 | 0.5100 | 0.7403 | 
0.741 | | 0.4498 | 39.17 | 9400 | 0.5097 | 0.7414 | 0.742 | | 0.4526 | 40.0 | 9600 | 0.5103 | 0.7420 | 0.743 | | 0.4508 | 40.83 | 9800 | 0.5082 | 0.7376 | 0.738 | | 0.4508 | 41.67 | 10000 | 0.5093 | 0.7412 | 0.742 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_1-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T05:56:38+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_1-seqsight\_8192\_512\_30M-L1\_f ========================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.3506 * F1 Score: 0.8549 * Accuracy: 0.855 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# Uploaded model

- **Developed by:** xsa-dev
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
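A minimal usage sketch, assuming the uploaded checkpoint is a merged 16-bit model loadable with the standard `transformers` causal-LM API; the dtype and device placement below are illustrative choices, not documented settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xsa-dev/hugs_llama3_technique_ft_16bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Generate a short continuation to sanity-check the fine-tuned weights.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```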
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
xsa-dev/hugs_llama3_technique_ft_16bit
null
[ "transformers", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:00:12+00:00
[]
[ "en" ]
TAGS #transformers #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model

- Developed by: xsa-dev
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

<img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model: unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: xsa-dev\n- License: apache-2.0\n- Finetuned from model: unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# DistilGPT2 DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2). ## Model Details - **Developed by:** Hugging Face - **Model type:** Transformer-based Language Model - **Language:** English - **License:** Apache 2.0 - **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2. - **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/). ## Uses, Limitations and Risks #### Limitations and Risks <details> <summary>Click to expand</summary> **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context. The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: - [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models. - [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). - [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. 
```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='distilgpt2') >>> set_seed(48) >>> generator("The White man worked as a", max_length=20, num_return_sequences=3) [{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"}, {'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'}, {'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}] >>> set_seed(48) >>> generator("The Black man worked as a", max_length=20, num_return_sequences=3) [{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'}, {'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'}, {'generated_text': 'The Black man worked as a police spokesman four months ago...'}] ``` </details> #### Potential Uses Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: > - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)* > - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.* > - *Entertainment: Creation of games, chat bots, and amusing generations.* Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser. #### Out-of-scope Uses OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. ### How to Get Started with the Model <details> <summary>Click to expand</summary> *Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.* Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='distilgpt2') >>> set_seed(42) >>> generator("Hello, I’m a language model", max_length=20, num_return_sequences=5) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. [{'generated_text': "Hello, I'm a language model, I'm a language model. 
In my previous post I've"}, {'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."}, {'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"}, {'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"}, {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2') model = GPT2Model.from_pretrained('distilgpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` And in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2') model = TFGPT2Model.from_pretrained('distilgpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` </details> ## Training Data DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText. ## Training Procedure The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108). ## Evaluation Results The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set). ## Environmental Impact *Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.* - **Hardware Type:** 8 16GB V100 - **Hours used:** 168 (1 week) - **Cloud Provider:** Azure - **Compute Region:** unavailable, assumed East US for calculations - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2 ## Citation ```bibtex @inproceedings{sanh2019distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas}, booktitle={NeurIPS EMC^2 Workshop}, year={2019} } ``` ## Glossary - <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. 
(2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531). <a href="https://huggingface.co/exbert/?model=distilgpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
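To make the knowledge-distillation objective described in the Training Procedure and Glossary concrete, here is a minimal sketch of the soft-target loss from Hinton et al. (2015); the temperature value is illustrative and not DistilGPT2's actual training configuration:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both output distributions with a temperature, then train the
    # student to match the teacher via KL divergence; the T**2 factor keeps
    # gradient magnitudes comparable across temperatures (Hinton et al., 2015).
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2
```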
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["openwebtext"], "co2_eq_emissions": 149200, "model-index": [{"name": "distilgpt2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "WikiText-103", "type": "wikitext"}, "metrics": [{"type": "perplexity", "value": 21.1, "name": "Perplexity"}]}]}]}
jiajiahong2134/DLhw2
null
[ "transformers", "pytorch", "tf", "jax", "tflite", "rust", "coreml", "safetensors", "gpt2", "text-generation", "exbert", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:2201.08542", "arxiv:2203.12574", "arxiv:1910.09700", "arxiv:1503.02531", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:01:51+00:00
[ "1910.01108", "2201.08542", "2203.12574", "1910.09700", "1503.02531" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #tflite #rust #coreml #safetensors #gpt2 #text-generation #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #license-apache-2.0 #model-index #co2_eq_emissions #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# DistilGPT2 DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2. ## Model Details - Developed by: Hugging Face - Model type: Transformer-based Language Model - Language: English - License: Apache 2.0 - Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2. - Resources for more information: See this repository for more about Distil\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2. ## Uses, Limitations and Risks #### Limitations and Risks <details> <summary>Click to expand</summary> CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. As the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context. The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: - Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models. - Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). - Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. </details> #### Potential Uses Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. 
The developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: > - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)* > - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.* > - *Entertainment: Creation of games, chat bots, and amusing generations.* Using DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser. #### Out-of-scope Uses OpenAI states in the GPT-2 model card: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. ### How to Get Started with the Model <details> <summary>Click to expand</summary> *Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.* Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: And in TensorFlow: </details> ## Training Data DistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText. ## Training Procedure The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019). ## Evaluation Results The creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set). ## Environmental Impact *Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.* - Hardware Type: 8 16GB V100 - Hours used: 168 (1 week) - Cloud Provider: Azure - Compute Region: unavailable, assumed East US for calculations - Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2 ## Glossary - <a name="knowledge-distillation">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. (2015). <a href="URL <img width="300px" src="URL </a>
[ "# DistilGPT2\n\nDistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.", "## Model Details\n\n- Developed by: Hugging Face\n- Model type: Transformer-based Language Model\n- Language: English\n- License: Apache 2.0\n- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.\n- Resources for more information: See this repository for more about Distil\\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.", "## Uses, Limitations and Risks", "#### Limitations and Risks\n\n<details>\n<summary>Click to expand</summary>\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). \n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: \n\n- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). \n- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. \n\n\n\n</details>", "#### Potential Uses\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. 
\n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: \n\n> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> - *Entertainment: Creation of games, chat bots, and amusing generations.*\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.", "#### Out-of-scope Uses\n\nOpenAI states in the GPT-2 model card: \n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n>\n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.", "### How to Get Started with the Model \n\n<details>\n<summary>Click to expand</summary>\n\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n \n \nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nAnd in TensorFlow:\n\n\n\n</details>", "## Training Data\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.", "## Training Procedure\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).", "## Evaluation Results\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).", "## Environmental Impact\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n- Hardware Type: 8 16GB V100\n- Hours used: 168 (1 week)\n- Cloud Provider: Azure\n- Compute Region: unavailable, assumed East US for calculations\n- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2", "## Glossary\n\n-\t<a name=\"knowledge-distillation\">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. 
(2015).\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>" ]
[ "TAGS\n#transformers #pytorch #tf #jax #tflite #rust #coreml #safetensors #gpt2 #text-generation #exbert #en #dataset-openwebtext #arxiv-1910.01108 #arxiv-2201.08542 #arxiv-2203.12574 #arxiv-1910.09700 #arxiv-1503.02531 #license-apache-2.0 #model-index #co2_eq_emissions #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# DistilGPT2\n\nDistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2.", "## Model Details\n\n- Developed by: Hugging Face\n- Model type: Transformer-based Language Model\n- Language: English\n- License: Apache 2.0\n- Model Description: DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.\n- Resources for more information: See this repository for more about Distil\\* (a class of compressed models including Distilled-GPT2), Sanh et al. (2019) for more information about knowledge distillation and the training procedure, and this page for more about GPT-2.", "## Uses, Limitations and Risks", "#### Limitations and Risks\n\n<details>\n<summary>Click to expand</summary>\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nAs the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). \n\nDistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.\n\nThe impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: \n\n- Silva, Tambwekar and Gombolay (2021) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.\n- Xu and Hu (2022) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). \n- Gupta et al. (2022) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. \n\n\n\n</details>", "#### Potential Uses\n\nSince DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. 
\n\nThe developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: \n\n> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*\n> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*\n> - *Entertainment: Creation of games, chat bots, and amusing generations.*\n\nUsing DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.", "#### Out-of-scope Uses\n\nOpenAI states in the GPT-2 model card: \n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.\n>\n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.", "### How to Get Started with the Model \n\n<details>\n<summary>Click to expand</summary>\n\n*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*\n\nUsing DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:\n\n \n \nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nAnd in TensorFlow:\n\n\n\n</details>", "## Training Data\n\nDistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText.", "## Training Procedure\n\nThe texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019).", "## Evaluation Results\n\nThe creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).", "## Environmental Impact\n\n*Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*\n\n- Hardware Type: 8 16GB V100\n- Hours used: 168 (1 week)\n- Cloud Provider: Azure\n- Compute Region: unavailable, assumed East US for calculations\n- Carbon Emitted *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2", "## Glossary\n\n-\t<a name=\"knowledge-distillation\">Knowledge Distillation</a>: As described in Sanh et al. (2019), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see Bucila et al. (2006) and Hinton et al. 
(2015).\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>" ]
null
mlx
# mlx-community/MXLewd-L2-20B-4bit
This model was converted to MLX format from [`Undi95/MXLewd-L2-20B`](https://huggingface.co/Undi95/MXLewd-L2-20B).
Refer to the [original model card](https://huggingface.co/Undi95/MXLewd-L2-20B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/MXLewd-L2-20B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"license": "cc-by-nc-4.0", "tags": ["mlx"]}
mlx-community/MXLewd-L2-20B-4bit
null
[ "mlx", "safetensors", "llama", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-27T06:02:31+00:00
[]
[]
TAGS #mlx #safetensors #llama #license-cc-by-nc-4.0 #region-us
# mlx-community/MXLewd-L2-20B-4bit This model was converted to MLX format from ['Undi95/MXLewd-L2-20B'](). Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/MXLewd-L2-20B-4bit\nThis model was converted to MLX format from ['Undi95/MXLewd-L2-20B']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #license-cc-by-nc-4.0 #region-us \n", "# mlx-community/MXLewd-L2-20B-4bit\nThis model was converted to MLX format from ['Undi95/MXLewd-L2-20B']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
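Since the quick-start section above is left blank, here is a hypothetical minimal example based only on the repository tags (`gpt_neox`, `text-generation`); the "Human:/Assistant:" prompt format is an assumption borrowed from the HH-RLHF convention suggested by the model name, not a documented requirement:

```python
from transformers import pipeline

# Load the checkpoint with the standard text-generation pipeline.
generator = pipeline("text-generation", model="sophiex/pythia-1b-sft_hh_rlhf")

prompt = "Human: How do I make a good cup of coffee?\n\nAssistant:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```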
{"library_name": "transformers", "tags": []}
sophiex/pythia-1b-sft_hh_rlhf
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:02:57+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
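The quick-start section above is also empty, so the following is a hypothetical minimal example inferred only from the repository tags (`llama`, `text-generation`); the plain-text prompt, with no chat template applied, is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hariharan345/tinyllama-momxchat-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short reply to confirm the weights load and decode cleanly.
inputs = tokenizer("What should I cook for dinner tonight?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```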
{"library_name": "transformers", "tags": []}
Hariharan345/tinyllama-momxchat-v1
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:05:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_1-seqsight_8192_512_30M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3572
- F1 Score: 0.8540
- Accuracy: 0.855

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5461 | 0.83 | 200 | 0.5206 | 0.7425 | 0.743 |
| 0.4884 | 1.67 | 400 | 0.5083 | 0.744 | 0.744 |
| 0.4765 | 2.5 | 600 | 0.5088 | 0.7569 | 0.757 |
| 0.4668 | 3.33 | 800 | 0.5015 | 0.7396 | 0.74 |
| 0.4674 | 4.17 | 1000 | 0.5042 | 0.7445 | 0.746 |
| 0.4556 | 5.0 | 1200 | 0.5099 | 0.7517 | 0.753 |
| 0.4527 | 5.83 | 1400 | 0.4961 | 0.7490 | 0.749 |
| 0.4483 | 6.67 | 1600 | 0.4996 | 0.7504 | 0.751 |
| 0.4442 | 7.5 | 1800 | 0.4995 | 0.7598 | 0.76 |
| 0.435 | 8.33 | 2000 | 0.5027 | 0.7499 | 0.75 |
| 0.4364 | 9.17 | 2200 | 0.5055 | 0.7559 | 0.756 |
| 0.4323 | 10.0 | 2400 | 0.5250 | 0.7421 | 0.744 |
| 0.4288 | 10.83 | 2600 | 0.5077 | 0.7416 | 0.743 |
| 0.4252 | 11.67 | 2800 | 0.5144 | 0.7510 | 0.752 |
| 0.4135 | 12.5 | 3000 | 0.5219 | 0.7497 | 0.751 |
| 0.422 | 13.33 | 3200 | 0.5150 | 0.7361 | 0.737 |
| 0.4098 | 14.17 | 3400 | 0.5238 | 0.7560 | 0.756 |
| 0.4104 | 15.0 | 3600 | 0.5316 | 0.7461 | 0.747 |
| 0.403 | 15.83 | 3800 | 0.5142 | 0.7455 | 0.746 |
| 0.404 | 16.67 | 4000 | 0.5393 | 0.7496 | 0.75 |
| 0.3993 | 17.5 | 4200 | 0.5363 | 0.7376 | 0.739 |
| 0.391 | 18.33 | 4400 | 0.5484 | 0.7389 | 0.74 |
| 0.3958 | 19.17 | 4600 | 0.5428 | 0.7402 | 0.741 |
| 0.3903 | 20.0 | 4800 | 0.5299 | 0.7449 | 0.745 |
| 0.3883 | 20.83 | 5000 | 0.5338 | 0.7429 | 0.743 |
| 0.3821 | 21.67 | 5200 | 0.5431 | 0.7436 | 0.744 |
| 0.3772 | 22.5 | 5400 | 0.5500 | 0.7391 | 0.74 |
| 0.3793 | 23.33 | 5600 | 0.5558 | 0.7322 | 0.734 |
| 0.375 | 24.17 | 5800 | 0.5617 | 0.7370 | 0.738 |
| 0.3756 | 25.0 | 6000 | 0.5468 | 0.7349 | 0.735 |
| 0.3696 | 25.83 | 6200 | 0.5491 | 0.7346 | 0.735 |
| 0.3615 | 26.67 | 6400 | 0.5616 | 0.7440 | 0.744 |
| 0.3633 | 27.5 | 6600 | 0.5913 | 0.7408 | 0.741 |
| 0.3619 | 28.33 | 6800 | 0.5796 | 0.7369 | 0.737 |
| 0.3594 | 29.17 | 7000 | 0.5640 | 0.7359 | 0.736 |
| 0.3591 | 30.0 | 7200 | 0.5710 | 0.7379 | 0.738 |
| 0.3572 | 30.83 | 7400 | 0.5823 | 0.7269 | 0.728 |
| 0.3524 | 31.67 | 7600 | 0.5870 | 0.7349 | 0.735 |
| 0.3533 | 32.5 | 7800 | 0.5801 | 0.7348 | 0.735 |
| 0.3502 | 33.33 | 8000 | 0.5838 | 0.7294 | 0.73 |
| 0.3532 | 34.17 | 8200 | 0.5757 | 0.7389 | 0.739 |
| 0.3441 | 35.0 | 8400 | 0.5883 | 0.7328 | 0.733 |
| 0.3463 | 35.83 | 8600 | 0.5815 | 0.7278 | 0.728 |
| 0.3462 | 36.67 | 8800 | 0.5869 | 0.7277 | 0.728 |
| 0.3382 | 37.5 | 9000 | 0.6033 | 0.7240 | 0.725 |
| 0.3426 | 38.33 | 9200 | 0.6004 | 0.7287 | 0.729 |
| 0.3371 | 39.17 | 9400 | 0.6018 | 0.7327 | 0.733 |
| 0.3423 | 40.0 | 9600 | 0.5990 | 0.7277 | 0.728 |
| 0.34 | 40.83 | 9800 | 0.5971 | 0.7298 | 0.73 |
| 0.3378 | 41.67 | 10000 | 0.5986 | 0.7266 | 0.727 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_1-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:05:36+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_1-seqsight\_8192\_512\_30M-L32\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.3572 * F1 Score: 0.8540 * Accuracy: 0.855 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_1-seqsight_8192_512_30M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3453
- F1 Score: 0.8467
- Accuracy: 0.847

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5564 | 0.83 | 200 | 0.5261 | 0.7419 | 0.742 |
| 0.4934 | 1.67 | 400 | 0.5128 | 0.7410 | 0.741 |
| 0.4824 | 2.5 | 600 | 0.5134 | 0.7507 | 0.751 |
| 0.4741 | 3.33 | 800 | 0.5098 | 0.7365 | 0.737 |
| 0.4762 | 4.17 | 1000 | 0.5133 | 0.7387 | 0.74 |
| 0.4669 | 5.0 | 1200 | 0.5146 | 0.7455 | 0.747 |
| 0.4654 | 5.83 | 1400 | 0.5056 | 0.7405 | 0.741 |
| 0.4636 | 6.67 | 1600 | 0.5076 | 0.7384 | 0.739 |
| 0.4609 | 7.5 | 1800 | 0.5012 | 0.7420 | 0.742 |
| 0.4538 | 8.33 | 2000 | 0.5043 | 0.7394 | 0.74 |
| 0.4554 | 9.17 | 2200 | 0.5055 | 0.7548 | 0.755 |
| 0.4538 | 10.0 | 2400 | 0.5309 | 0.7361 | 0.739 |
| 0.4509 | 10.83 | 2600 | 0.5123 | 0.7422 | 0.744 |
| 0.4496 | 11.67 | 2800 | 0.5134 | 0.7388 | 0.741 |
| 0.4383 | 12.5 | 3000 | 0.5055 | 0.7491 | 0.75 |
| 0.4496 | 13.33 | 3200 | 0.5057 | 0.7433 | 0.745 |
| 0.4409 | 14.17 | 3400 | 0.4966 | 0.752 | 0.752 |
| 0.4385 | 15.0 | 3600 | 0.5030 | 0.7558 | 0.757 |
| 0.4371 | 15.83 | 3800 | 0.4960 | 0.7544 | 0.755 |
| 0.4385 | 16.67 | 4000 | 0.5045 | 0.7574 | 0.758 |
| 0.4347 | 17.5 | 4200 | 0.5035 | 0.7507 | 0.752 |
| 0.429 | 18.33 | 4400 | 0.5085 | 0.7593 | 0.76 |
| 0.4354 | 19.17 | 4600 | 0.5055 | 0.7481 | 0.749 |
| 0.4323 | 20.0 | 4800 | 0.4935 | 0.7597 | 0.76 |
| 0.4319 | 20.83 | 5000 | 0.4992 | 0.7537 | 0.754 |
| 0.4267 | 21.67 | 5200 | 0.4983 | 0.7575 | 0.758 |
| 0.4249 | 22.5 | 5400 | 0.4994 | 0.7468 | 0.747 |
| 0.4265 | 23.33 | 5600 | 0.5038 | 0.7470 | 0.748 |
| 0.4253 | 24.17 | 5800 | 0.5070 | 0.7510 | 0.752 |
| 0.4262 | 25.0 | 6000 | 0.4912 | 0.7510 | 0.751 |
| 0.424 | 25.83 | 6200 | 0.4955 | 0.7597 | 0.76 |
| 0.4191 | 26.67 | 6400 | 0.4953 | 0.7620 | 0.762 |
| 0.4231 | 27.5 | 6600 | 0.5051 | 0.7638 | 0.764 |
| 0.4192 | 28.33 | 6800 | 0.4985 | 0.7497 | 0.75 |
| 0.4207 | 29.17 | 7000 | 0.4991 | 0.7488 | 0.749 |
| 0.4207 | 30.0 | 7200 | 0.4955 | 0.7517 | 0.752 |
| 0.4191 | 30.83 | 7400 | 0.5034 | 0.7482 | 0.749 |
| 0.4166 | 31.67 | 7600 | 0.4966 | 0.7528 | 0.753 |
| 0.4186 | 32.5 | 7800 | 0.4978 | 0.7528 | 0.753 |
| 0.4165 | 33.33 | 8000 | 0.4988 | 0.7518 | 0.752 |
| 0.4204 | 34.17 | 8200 | 0.4949 | 0.7487 | 0.749 |
| 0.413 | 35.0 | 8400 | 0.4975 | 0.7508 | 0.751 |
| 0.417 | 35.83 | 8600 | 0.4952 | 0.7478 | 0.748 |
| 0.4172 | 36.67 | 8800 | 0.4971 | 0.7467 | 0.747 |
| 0.4101 | 37.5 | 9000 | 0.5015 | 0.7530 | 0.754 |
| 0.4141 | 38.33 | 9200 | 0.4980 | 0.7517 | 0.752 |
| 0.4116 | 39.17 | 9400 | 0.4992 | 0.7517 | 0.752 |
| 0.4143 | 40.0 | 9600 | 0.4989 | 0.7507 | 0.751 |
| 0.4135 | 40.83 | 9800 | 0.4982 | 0.7508 | 0.751 |
| 0.4122 | 41.67 | 10000 | 0.4985 | 0.7516 | 0.752 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_1-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:05:36+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_1-seqsight\_8192\_512\_30M-L8\_f ========================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.3453 * F1 Score: 0.8467 * Accuracy: 0.847 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_4-seqsight_8192_512_30M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3424
- F1 Score: 0.8517
- Accuracy: 0.852

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5854 | 1.34 | 200 | 0.5537 | 0.7169 | 0.717 |
| 0.5051 | 2.68 | 400 | 0.5201 | 0.7266 | 0.727 |
| 0.4902 | 4.03 | 600 | 0.5080 | 0.7427 | 0.743 |
| 0.4742 | 5.37 | 800 | 0.5060 | 0.7509 | 0.751 |
| 0.4624 | 6.71 | 1000 | 0.4862 | 0.7540 | 0.754 |
| 0.4554 | 8.05 | 1200 | 0.4918 | 0.7601 | 0.761 |
| 0.4482 | 9.4 | 1400 | 0.4795 | 0.7689 | 0.769 |
| 0.4448 | 10.74 | 1600 | 0.4757 | 0.7639 | 0.764 |
| 0.4376 | 12.08 | 1800 | 0.4773 | 0.7739 | 0.774 |
| 0.4382 | 13.42 | 2000 | 0.4706 | 0.7617 | 0.762 |
| 0.4265 | 14.77 | 2200 | 0.4875 | 0.7599 | 0.761 |
| 0.4297 | 16.11 | 2400 | 0.4678 | 0.7730 | 0.773 |
| 0.4246 | 17.45 | 2600 | 0.4689 | 0.7749 | 0.775 |
| 0.4242 | 18.79 | 2800 | 0.4708 | 0.7727 | 0.773 |
| 0.4251 | 20.13 | 3000 | 0.4730 | 0.7694 | 0.77 |
| 0.4188 | 21.48 | 3200 | 0.4637 | 0.7739 | 0.774 |
| 0.4162 | 22.82 | 3400 | 0.4657 | 0.7729 | 0.773 |
| 0.416 | 24.16 | 3600 | 0.4613 | 0.7730 | 0.773 |
| 0.4182 | 25.5 | 3800 | 0.4592 | 0.7840 | 0.784 |
| 0.4112 | 26.85 | 4000 | 0.4655 | 0.7747 | 0.775 |
| 0.4128 | 28.19 | 4200 | 0.4651 | 0.7738 | 0.774 |
| 0.4061 | 29.53 | 4400 | 0.4662 | 0.7788 | 0.779 |
| 0.4098 | 30.87 | 4600 | 0.4586 | 0.7809 | 0.781 |
| 0.4102 | 32.21 | 4800 | 0.4567 | 0.7819 | 0.782 |
| 0.4037 | 33.56 | 5000 | 0.4619 | 0.7840 | 0.784 |
| 0.407 | 34.9 | 5200 | 0.4613 | 0.7850 | 0.785 |
| 0.4086 | 36.24 | 5400 | 0.4580 | 0.784 | 0.784 |
| 0.4021 | 37.58 | 5600 | 0.4589 | 0.7820 | 0.782 |
| 0.4039 | 38.93 | 5800 | 0.4641 | 0.7767 | 0.777 |
| 0.4008 | 40.27 | 6000 | 0.4613 | 0.7800 | 0.78 |
| 0.4015 | 41.61 | 6200 | 0.4617 | 0.7798 | 0.78 |
| 0.4019 | 42.95 | 6400 | 0.4610 | 0.7848 | 0.785 |
| 0.403 | 44.3 | 6600 | 0.4558 | 0.7860 | 0.786 |
| 0.3985 | 45.64 | 6800 | 0.4609 | 0.7878 | 0.788 |
| 0.4003 | 46.98 | 7000 | 0.4631 | 0.7847 | 0.785 |
| 0.4027 | 48.32 | 7200 | 0.4612 | 0.7817 | 0.782 |
| 0.3962 | 49.66 | 7400 | 0.4619 | 0.7825 | 0.783 |
| 0.3925 | 51.01 | 7600 | 0.4575 | 0.7829 | 0.783 |
| 0.3959 | 52.35 | 7800 | 0.4566 | 0.79 | 0.79 |
| 0.3929 | 53.69 | 8000 | 0.4631 | 0.7826 | 0.783 |
| 0.3971 | 55.03 | 8200 | 0.4689 | 0.7783 | 0.779 |
| 0.3944 | 56.38 | 8400 | 0.4611 | 0.7827 | 0.783 |
| 0.3944 | 57.72 | 8600 | 0.4564 | 0.7900 | 0.79 |
| 0.3948 | 59.06 | 8800 | 0.4602 | 0.7807 | 0.781 |
| 0.3919 | 60.4 | 9000 | 0.4594 | 0.7808 | 0.781 |
| 0.3945 | 61.74 | 9200 | 0.4573 | 0.7829 | 0.783 |
| 0.3947 | 63.09 | 9400 | 0.4594 | 0.7778 | 0.778 |
| 0.395 | 64.43 | 9600 | 0.4566 | 0.7829 | 0.783 |
| 0.39 | 65.77 | 9800 | 0.4578 | 0.7809 | 0.781 |
| 0.3899 | 67.11 | 10000 | 0.4582 | 0.7809 | 0.781 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_4-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:06:17+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_4-seqsight\_8192\_512\_30M-L1\_f ========================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.3424 * F1 Score: 0.8517 * Accuracy: 0.852 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
nanxiangzifeng/test
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:06:29+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_4-seqsight_8192_512_30M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3598
- F1 Score: 0.8480
- Accuracy: 0.848

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5483 | 1.34 | 200 | 0.5224 | 0.7330 | 0.734 |
| 0.472 | 2.68 | 400 | 0.4887 | 0.7609 | 0.761 |
| 0.4543 | 4.03 | 600 | 0.4795 | 0.7639 | 0.764 |
| 0.4372 | 5.37 | 800 | 0.4842 | 0.7695 | 0.77 |
| 0.4264 | 6.71 | 1000 | 0.4702 | 0.778 | 0.778 |
| 0.4206 | 8.05 | 1200 | 0.4753 | 0.7693 | 0.77 |
| 0.414 | 9.4 | 1400 | 0.4646 | 0.7717 | 0.772 |
| 0.4068 | 10.74 | 1600 | 0.4649 | 0.7785 | 0.779 |
| 0.4021 | 12.08 | 1800 | 0.4631 | 0.7840 | 0.784 |
| 0.3979 | 13.42 | 2000 | 0.4624 | 0.7739 | 0.775 |
| 0.387 | 14.77 | 2200 | 0.4719 | 0.7798 | 0.781 |
| 0.3869 | 16.11 | 2400 | 0.4515 | 0.7790 | 0.779 |
| 0.3779 | 17.45 | 2600 | 0.4681 | 0.7760 | 0.777 |
| 0.3785 | 18.79 | 2800 | 0.4608 | 0.7838 | 0.784 |
| 0.3752 | 20.13 | 3000 | 0.4694 | 0.7787 | 0.78 |
| 0.3677 | 21.48 | 3200 | 0.4535 | 0.7949 | 0.795 |
| 0.3626 | 22.82 | 3400 | 0.4574 | 0.7979 | 0.798 |
| 0.3594 | 24.16 | 3600 | 0.4475 | 0.7980 | 0.798 |
| 0.3547 | 25.5 | 3800 | 0.4535 | 0.7910 | 0.791 |
| 0.3476 | 26.85 | 4000 | 0.4552 | 0.7998 | 0.8 |
| 0.3481 | 28.19 | 4200 | 0.4633 | 0.7926 | 0.793 |
| 0.3391 | 29.53 | 4400 | 0.4584 | 0.7988 | 0.799 |
| 0.3389 | 30.87 | 4600 | 0.4667 | 0.7949 | 0.796 |
| 0.3374 | 32.21 | 4800 | 0.4561 | 0.7965 | 0.797 |
| 0.3307 | 33.56 | 5000 | 0.4695 | 0.7985 | 0.799 |
| 0.3335 | 34.9 | 5200 | 0.4568 | 0.8008 | 0.801 |
| 0.3299 | 36.24 | 5400 | 0.4493 | 0.7989 | 0.799 |
| 0.3214 | 37.58 | 5600 | 0.4522 | 0.8027 | 0.803 |
| 0.3222 | 38.93 | 5800 | 0.4559 | 0.7958 | 0.796 |
| 0.3172 | 40.27 | 6000 | 0.4492 | 0.7939 | 0.794 |
| 0.3139 | 41.61 | 6200 | 0.4699 | 0.7957 | 0.796 |
| 0.3151 | 42.95 | 6400 | 0.4662 | 0.7943 | 0.795 |
| 0.3146 | 44.3 | 6600 | 0.4521 | 0.8029 | 0.803 |
| 0.3088 | 45.64 | 6800 | 0.4535 | 0.7968 | 0.797 |
| 0.3066 | 46.98 | 7000 | 0.4643 | 0.7965 | 0.797 |
| 0.3064 | 48.32 | 7200 | 0.4512 | 0.8049 | 0.805 |
| 0.3033 | 49.66 | 7400 | 0.4592 | 0.8007 | 0.801 |
| 0.3024 | 51.01 | 7600 | 0.4569 | 0.8006 | 0.801 |
| 0.2991 | 52.35 | 7800 | 0.4457 | 0.8140 | 0.814 |
| 0.2948 | 53.69 | 8000 | 0.4808 | 0.7932 | 0.794 |
| 0.2969 | 55.03 | 8200 | 0.4788 | 0.7901 | 0.791 |
| 0.2953 | 56.38 | 8400 | 0.4647 | 0.8027 | 0.803 |
| 0.2946 | 57.72 | 8600 | 0.4582 | 0.8058 | 0.806 |
| 0.2931 | 59.06 | 8800 | 0.4634 | 0.8017 | 0.802 |
| 0.2901 | 60.4 | 9000 | 0.4639 | 0.8068 | 0.807 |
| 0.2909 | 61.74 | 9200 | 0.4583 | 0.8080 | 0.808 |
| 0.2918 | 63.09 | 9400 | 0.4634 | 0.8037 | 0.804 |
| 0.2897 | 64.43 | 9600 | 0.4629 | 0.8047 | 0.805 |
| 0.286 | 65.77 | 9800 | 0.4610 | 0.8098 | 0.81 |
| 0.2892 | 67.11 | 10000 | 0.4608 | 0.8098 | 0.81 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_4-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:06:55+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_4-seqsight\_8192\_512\_30M-L8\_f ========================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.3598 * F1 Score: 0.8480 * Accuracy: 0.848 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
uniiiii/wav2vec2-base-timit-demo-colab
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:08:28+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Quantizations of https://huggingface.co/Nexusflow/Starling-LM-7B-beta # From original readme ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> **Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.** Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details. In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free test. The conversation template is the same as Openchat-3.5-0106: ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` ## Code Examples ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta") model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta") def generate_response(prompt): input_ids = tokenizer(prompt, return_tensors="pt").input_ids outputs = model.generate( input_ids, max_length=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, ) response_ids = outputs[0] response_text = tokenizer.decode(response_ids, skip_special_tokens=True) return response_text # Single-turn conversation prompt = "Hello, how are you?" single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:" response_text = generate_response(single_turn_prompt) print("Response:", response_text) ## Multi-turn conversation prompt = "Hello" follow_up_question = "How are you today?" response = "" multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:" response_text = generate_response(multi_turn_prompt) print("Multi-turn conversation response:", response_text) ### Coding conversation prompt = "Implement quicksort using C++" coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:" response = generate_response(coding_prompt) print("Coding conversation response:", response) ```
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "Starling-LM-7B-beta"], "pipeline_tag": "text-generation", "inference": false}
duyntnet/Starling-LM-7B-beta-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "Starling-LM-7B-beta", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-27T06:09:05+00:00
[]
[ "en" ]
TAGS #transformers #gguf #imatrix #Starling-LM-7B-beta #text-generation #en #license-other #region-us
Quantizations of URL # From original readme ## Uses Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less. Our model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details. In addition, our model is hosted on LMSYS Chatbot Arena for free test. The conversation template is the same as Openchat-3.5-0106: ## Code Examples
[ "# From original readme", "## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat-3.5-0106:", "## Code Examples" ]
[ "TAGS\n#transformers #gguf #imatrix #Starling-LM-7B-beta #text-generation #en #license-other #region-us \n", "# From original readme", "## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat-3.5-0106:", "## Code Examples" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_4-seqsight_8192_512_30M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5419
- F1 Score: 0.8429
- Accuracy: 0.843

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5298 | 1.34 | 200 | 0.5022 | 0.7565 | 0.757 |
| 0.4547 | 2.68 | 400 | 0.4890 | 0.7627 | 0.764 |
| 0.4374 | 4.03 | 600 | 0.4695 | 0.7688 | 0.769 |
| 0.417 | 5.37 | 800 | 0.4740 | 0.7803 | 0.781 |
| 0.4019 | 6.71 | 1000 | 0.4525 | 0.7909 | 0.791 |
| 0.3911 | 8.05 | 1200 | 0.4531 | 0.7927 | 0.793 |
| 0.3802 | 9.4 | 1400 | 0.4492 | 0.7999 | 0.8 |
| 0.3654 | 10.74 | 1600 | 0.4430 | 0.8068 | 0.807 |
| 0.3567 | 12.08 | 1800 | 0.4510 | 0.8098 | 0.81 |
| 0.3443 | 13.42 | 2000 | 0.4679 | 0.7884 | 0.79 |
| 0.3297 | 14.77 | 2200 | 0.4379 | 0.8086 | 0.809 |
| 0.3209 | 16.11 | 2400 | 0.4293 | 0.8140 | 0.814 |
| 0.3056 | 17.45 | 2600 | 0.4517 | 0.8065 | 0.807 |
| 0.2973 | 18.79 | 2800 | 0.4328 | 0.8200 | 0.82 |
| 0.2904 | 20.13 | 3000 | 0.4694 | 0.7990 | 0.8 |
| 0.2822 | 21.48 | 3200 | 0.4324 | 0.8220 | 0.822 |
| 0.2649 | 22.82 | 3400 | 0.4480 | 0.8199 | 0.82 |
| 0.2603 | 24.16 | 3600 | 0.4315 | 0.826 | 0.826 |
| 0.25 | 25.5 | 3800 | 0.4434 | 0.8290 | 0.829 |
| 0.2421 | 26.85 | 4000 | 0.4351 | 0.8370 | 0.837 |
| 0.2383 | 28.19 | 4200 | 0.4811 | 0.8113 | 0.812 |
| 0.2286 | 29.53 | 4400 | 0.4528 | 0.8419 | 0.842 |
| 0.2263 | 30.87 | 4600 | 0.4559 | 0.8269 | 0.827 |
| 0.2144 | 32.21 | 4800 | 0.4749 | 0.8309 | 0.831 |
| 0.2087 | 33.56 | 5000 | 0.4811 | 0.8400 | 0.84 |
| 0.209 | 34.9 | 5200 | 0.4559 | 0.8390 | 0.839 |
| 0.2005 | 36.24 | 5400 | 0.4649 | 0.8510 | 0.851 |
| 0.1936 | 37.58 | 5600 | 0.4457 | 0.8470 | 0.847 |
| 0.1885 | 38.93 | 5800 | 0.4884 | 0.8449 | 0.845 |
| 0.1823 | 40.27 | 6000 | 0.4702 | 0.8519 | 0.852 |
| 0.1812 | 41.61 | 6200 | 0.4743 | 0.8450 | 0.845 |
| 0.1769 | 42.95 | 6400 | 0.4743 | 0.8530 | 0.853 |
| 0.1747 | 44.3 | 6600 | 0.4964 | 0.8560 | 0.856 |
| 0.1684 | 45.64 | 6800 | 0.4925 | 0.8530 | 0.853 |
| 0.1649 | 46.98 | 7000 | 0.4920 | 0.8550 | 0.855 |
| 0.1642 | 48.32 | 7200 | 0.4878 | 0.8590 | 0.859 |
| 0.1606 | 49.66 | 7400 | 0.4807 | 0.8550 | 0.855 |
| 0.1583 | 51.01 | 7600 | 0.4972 | 0.8560 | 0.856 |
| 0.1553 | 52.35 | 7800 | 0.5003 | 0.8570 | 0.857 |
| 0.1473 | 53.69 | 8000 | 0.5045 | 0.8580 | 0.858 |
| 0.1492 | 55.03 | 8200 | 0.5266 | 0.8560 | 0.856 |
| 0.1442 | 56.38 | 8400 | 0.5160 | 0.858 | 0.858 |
| 0.1469 | 57.72 | 8600 | 0.5068 | 0.8560 | 0.856 |
| 0.1392 | 59.06 | 8800 | 0.5262 | 0.8540 | 0.854 |
| 0.1418 | 60.4 | 9000 | 0.5185 | 0.8560 | 0.856 |
| 0.1414 | 61.74 | 9200 | 0.5193 | 0.8570 | 0.857 |
| 0.1344 | 63.09 | 9400 | 0.5241 | 0.8560 | 0.856 |
| 0.138 | 64.43 | 9600 | 0.5215 | 0.8520 | 0.852 |
| 0.1358 | 65.77 | 9800 | 0.5252 | 0.8590 | 0.859 |
| 0.133 | 67.11 | 10000 | 0.5244 | 0.8600 | 0.86 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_4-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:16:12+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_4-seqsight\_8192\_512\_30M-L32\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.5419 * F1 Score: 0.8429 * Accuracy: 0.843 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_3-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5449 - F1 Score: 0.7169 - Accuracy: 0.719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6467 | 0.93 | 200 | 0.5802 | 0.6970 | 0.697 | | 0.6083 | 1.87 | 400 | 0.5770 | 0.6976 | 0.698 | | 0.5971 | 2.8 | 600 | 0.5605 | 0.7075 | 0.71 | | 0.5911 | 3.74 | 800 | 0.5674 | 0.7031 | 0.703 | | 0.5882 | 4.67 | 1000 | 0.5624 | 0.7079 | 0.708 | | 0.5847 | 5.61 | 1200 | 0.5655 | 0.7009 | 0.701 | | 0.5793 | 6.54 | 1400 | 0.5616 | 0.7069 | 0.707 | | 0.5799 | 7.48 | 1600 | 0.5653 | 0.6941 | 0.694 | | 0.5761 | 8.41 | 1800 | 0.5666 | 0.6910 | 0.691 | | 0.5804 | 9.35 | 2000 | 0.5602 | 0.7049 | 0.705 | | 0.5732 | 10.28 | 2200 | 0.5661 | 0.6960 | 0.696 | | 0.5722 | 11.21 | 2400 | 0.5587 | 0.7025 | 0.703 | | 0.5725 | 12.15 | 2600 | 0.5505 | 0.7104 | 0.713 | | 0.5685 | 13.08 | 2800 | 0.5540 | 0.7074 | 0.709 | | 0.5701 | 14.02 | 3000 | 0.5515 | 0.7068 | 0.708 | | 0.5692 | 14.95 | 3200 | 0.5517 | 0.7037 | 0.705 | | 0.5678 | 15.89 | 3400 | 0.5511 | 0.7025 | 0.703 | | 0.5654 | 16.82 | 3600 | 0.5562 | 0.6989 | 0.699 | | 0.5647 | 17.76 | 3800 | 0.5499 | 0.7058 | 0.707 | | 0.5657 | 18.69 | 4000 | 0.5540 | 0.7049 | 0.705 | | 0.5623 | 19.63 | 4200 | 0.5523 | 0.7000 | 0.704 | | 0.5647 | 20.56 | 4400 | 0.5500 | 0.7035 | 0.705 | | 0.5615 | 21.5 | 4600 | 0.5620 | 0.6965 | 0.697 | | 0.5596 | 22.43 | 4800 | 0.5545 | 0.7046 | 0.705 | | 0.5639 | 23.36 | 5000 | 0.5541 | 0.6960 | 0.696 | | 0.561 | 24.3 | 5200 | 0.5589 | 0.6879 | 0.688 | | 0.5563 | 25.23 | 5400 | 0.5528 | 0.7071 | 0.709 | | 0.5629 | 26.17 | 5600 | 0.5498 | 0.7035 | 0.704 | | 0.5544 | 27.1 | 5800 | 0.5487 | 0.7110 | 0.713 | | 0.5561 | 28.04 | 6000 | 0.5506 | 0.7045 | 0.705 | | 0.5545 | 28.97 | 6200 | 0.5551 | 0.6971 | 0.697 | | 0.5585 | 29.91 | 6400 | 0.5513 | 0.6987 | 0.699 | | 0.5568 | 30.84 | 6600 | 0.5506 | 0.7056 | 0.706 | | 0.5548 | 31.78 | 6800 | 0.5540 | 0.702 | 0.702 | | 0.5545 | 32.71 | 7000 | 0.5514 | 0.7054 | 0.706 | | 0.5582 | 33.64 | 7200 | 0.5486 | 0.7001 | 0.701 | | 0.5502 | 34.58 | 7400 | 0.5543 | 0.6971 | 0.697 | | 0.558 | 35.51 | 7600 | 0.5483 | 0.7028 | 0.703 | | 0.5565 | 36.45 | 7800 | 0.5519 | 0.6999 | 0.7 | | 0.5552 | 37.38 | 8000 | 0.5486 | 0.7018 | 0.702 | | 0.5502 | 38.32 | 8200 | 0.5507 | 0.6990 | 0.7 | | 0.5546 | 39.25 | 8400 | 0.5517 | 0.7107 | 0.711 | | 0.5534 | 40.19 | 8600 | 0.5504 | 0.7084 | 0.709 | | 0.5525 | 41.12 | 8800 | 0.5502 | 0.7086 | 0.709 | | 0.5524 | 42.06 | 9000 | 0.5508 | 0.7056 | 0.706 | | 0.5529 | 42.99 | 9200 | 0.5511 | 0.7069 | 0.707 | | 0.5515 | 43.93 | 9400 | 0.5527 | 0.7040 | 0.704 | | 0.5509 | 44.86 | 9600 | 0.5508 | 0.7068 | 0.707 | | 0.554 | 45.79 | 9800 | 0.5511 | 0.7068 | 0.707 | | 0.5475 | 46.73 | 10000 | 0.5519 | 0.7038 | 0.704 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_3-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:16:12+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_3-seqsight\_8192\_512\_30M-L1\_f ========================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5449 * F1 Score: 0.7169 * Accuracy: 0.719 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_3-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5253 - F1 Score: 0.7283 - Accuracy: 0.73 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6224 | 0.93 | 200 | 0.5583 | 0.7092 | 0.712 | | 0.5873 | 1.87 | 400 | 0.5816 | 0.6804 | 0.684 | | 0.5777 | 2.8 | 600 | 0.5465 | 0.7045 | 0.705 | | 0.5694 | 3.74 | 800 | 0.5644 | 0.6925 | 0.693 | | 0.5634 | 4.67 | 1000 | 0.5486 | 0.7019 | 0.702 | | 0.5573 | 5.61 | 1200 | 0.5394 | 0.7164 | 0.72 | | 0.5498 | 6.54 | 1400 | 0.5508 | 0.6930 | 0.693 | | 0.5461 | 7.48 | 1600 | 0.5399 | 0.7098 | 0.71 | | 0.539 | 8.41 | 1800 | 0.5401 | 0.7089 | 0.709 | | 0.5408 | 9.35 | 2000 | 0.5442 | 0.7122 | 0.714 | | 0.5303 | 10.28 | 2200 | 0.5315 | 0.7169 | 0.717 | | 0.5259 | 11.21 | 2400 | 0.5553 | 0.7148 | 0.715 | | 0.5175 | 12.15 | 2600 | 0.5496 | 0.7211 | 0.724 | | 0.5134 | 13.08 | 2800 | 0.5447 | 0.7139 | 0.717 | | 0.5102 | 14.02 | 3000 | 0.5330 | 0.7248 | 0.725 | | 0.5038 | 14.95 | 3200 | 0.5366 | 0.7201 | 0.721 | | 0.5009 | 15.89 | 3400 | 0.5310 | 0.7278 | 0.728 | | 0.4952 | 16.82 | 3600 | 0.5506 | 0.7161 | 0.716 | | 0.4919 | 17.76 | 3800 | 0.5353 | 0.7388 | 0.739 | | 0.4871 | 18.69 | 4000 | 0.5521 | 0.71 | 0.71 | | 0.4785 | 19.63 | 4200 | 0.5350 | 0.7376 | 0.738 | | 0.4785 | 20.56 | 4400 | 0.5581 | 0.7181 | 0.718 | | 0.4698 | 21.5 | 4600 | 0.5795 | 0.7015 | 0.702 | | 0.4645 | 22.43 | 4800 | 0.5629 | 0.7243 | 0.725 | | 0.464 | 23.36 | 5000 | 0.5929 | 0.7088 | 0.709 | | 0.4578 | 24.3 | 5200 | 0.5819 | 0.7021 | 0.703 | | 0.4504 | 25.23 | 5400 | 0.6046 | 0.7011 | 0.701 | | 0.454 | 26.17 | 5600 | 0.5637 | 0.7189 | 0.719 | | 0.445 | 27.1 | 5800 | 0.5777 | 0.7151 | 0.715 | | 0.4441 | 28.04 | 6000 | 0.5787 | 0.7029 | 0.703 | | 0.4376 | 28.97 | 6200 | 0.5924 | 0.7131 | 0.713 | | 0.4383 | 29.91 | 6400 | 0.5811 | 0.7180 | 0.718 | | 0.4348 | 30.84 | 6600 | 0.5807 | 0.7061 | 0.706 | | 0.4307 | 31.78 | 6800 | 0.5864 | 0.7069 | 0.707 | | 0.4262 | 32.71 | 7000 | 0.5827 | 0.7080 | 0.708 | | 0.4272 | 33.64 | 7200 | 0.5802 | 0.7069 | 0.707 | | 0.4171 | 34.58 | 7400 | 0.6025 | 0.7005 | 0.702 | | 0.4225 | 35.51 | 7600 | 0.5901 | 0.7107 | 0.711 | | 0.4195 | 36.45 | 7800 | 0.6142 | 0.712 | 0.712 | | 0.4165 | 37.38 | 8000 | 0.6216 | 0.7058 | 0.706 | | 0.4121 | 38.32 | 8200 | 0.6197 | 0.7081 | 0.708 | | 0.4092 | 39.25 | 8400 | 0.6197 | 0.7109 | 0.711 | | 0.4064 | 40.19 | 8600 | 0.6171 | 0.7039 | 0.704 | | 0.4048 | 41.12 | 8800 | 0.6202 | 0.7101 | 0.71 | | 0.4053 | 42.06 | 9000 | 0.6268 | 0.6980 | 0.698 | | 0.4027 | 42.99 | 9200 | 0.6163 | 0.7049 | 0.705 | | 0.4018 | 43.93 | 9400 | 0.6286 | 0.7048 | 0.705 | | 0.3973 | 44.86 | 9600 | 0.6287 | 0.7050 | 0.705 | | 0.4001 | 45.79 | 9800 | 0.6281 | 0.7060 | 0.706 | | 0.3952 | 46.73 | 10000 | 0.6272 | 0.7090 | 0.709 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
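For reference, `lr_scheduler_type: linear` with `training_steps: 10000` and no listed warmup means the learning rate decays linearly from 0.0005 at step 0 to 0 at step 10000, i.e. lr(t) = 0.0005 * (1 - t/10000). A small self-contained sketch of that schedule, assuming zero warmup steps:

```python
import torch
from transformers import get_linear_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))  # stand-in parameter
optimizer = torch.optim.Adam([param], lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)
for _ in range(2_000):  # simulate 2000 optimizer steps
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())  # ~[0.0004] = 0.0005 * (1 - 2000/10000)
```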
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_3-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:16:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_3-seqsight\_8192\_512\_30M-L32\_f ========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5253 * F1 Score: 0.7283 * Accuracy: 0.73 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_3-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5216 - F1 Score: 0.7278 - Accuracy: 0.729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6304 | 0.93 | 200 | 0.5663 | 0.6962 | 0.697 | | 0.5938 | 1.87 | 400 | 0.5883 | 0.6789 | 0.682 | | 0.5851 | 2.8 | 600 | 0.5496 | 0.7129 | 0.715 | | 0.5787 | 3.74 | 800 | 0.5671 | 0.6956 | 0.696 | | 0.5754 | 4.67 | 1000 | 0.5540 | 0.7041 | 0.704 | | 0.5706 | 5.61 | 1200 | 0.5475 | 0.6985 | 0.699 | | 0.5638 | 6.54 | 1400 | 0.5516 | 0.7010 | 0.701 | | 0.5629 | 7.48 | 1600 | 0.5494 | 0.7051 | 0.705 | | 0.5583 | 8.41 | 1800 | 0.5522 | 0.6981 | 0.698 | | 0.5629 | 9.35 | 2000 | 0.5488 | 0.7014 | 0.703 | | 0.5536 | 10.28 | 2200 | 0.5497 | 0.7060 | 0.706 | | 0.5516 | 11.21 | 2400 | 0.5589 | 0.7027 | 0.703 | | 0.5508 | 12.15 | 2600 | 0.5410 | 0.7070 | 0.71 | | 0.545 | 13.08 | 2800 | 0.5533 | 0.7074 | 0.712 | | 0.5459 | 14.02 | 3000 | 0.5426 | 0.7043 | 0.705 | | 0.5429 | 14.95 | 3200 | 0.5418 | 0.7083 | 0.711 | | 0.5423 | 15.89 | 3400 | 0.5361 | 0.7122 | 0.713 | | 0.5388 | 16.82 | 3600 | 0.5499 | 0.7093 | 0.71 | | 0.5381 | 17.76 | 3800 | 0.5418 | 0.7059 | 0.708 | | 0.5374 | 18.69 | 4000 | 0.5519 | 0.7041 | 0.704 | | 0.5325 | 19.63 | 4200 | 0.5406 | 0.7118 | 0.715 | | 0.5342 | 20.56 | 4400 | 0.5422 | 0.7053 | 0.706 | | 0.5281 | 21.5 | 4600 | 0.5574 | 0.6975 | 0.698 | | 0.5259 | 22.43 | 4800 | 0.5524 | 0.7069 | 0.708 | | 0.5313 | 23.36 | 5000 | 0.5647 | 0.7020 | 0.702 | | 0.5252 | 24.3 | 5200 | 0.5607 | 0.7050 | 0.706 | | 0.5197 | 25.23 | 5400 | 0.5651 | 0.7112 | 0.712 | | 0.5261 | 26.17 | 5600 | 0.5460 | 0.7165 | 0.717 | | 0.5185 | 27.1 | 5800 | 0.5513 | 0.7096 | 0.71 | | 0.519 | 28.04 | 6000 | 0.5565 | 0.7080 | 0.708 | | 0.5155 | 28.97 | 6200 | 0.5603 | 0.7081 | 0.708 | | 0.5191 | 29.91 | 6400 | 0.5500 | 0.7175 | 0.718 | | 0.5181 | 30.84 | 6600 | 0.5504 | 0.7119 | 0.712 | | 0.5134 | 31.78 | 6800 | 0.5602 | 0.7051 | 0.705 | | 0.5147 | 32.71 | 7000 | 0.5548 | 0.7119 | 0.712 | | 0.5155 | 33.64 | 7200 | 0.5516 | 0.7051 | 0.705 | | 0.5056 | 34.58 | 7400 | 0.5622 | 0.6995 | 0.7 | | 0.5148 | 35.51 | 7600 | 0.5555 | 0.7081 | 0.708 | | 0.5112 | 36.45 | 7800 | 0.5629 | 0.7060 | 0.706 | | 0.5112 | 37.38 | 8000 | 0.5522 | 0.7091 | 0.709 | | 0.5062 | 38.32 | 8200 | 0.5634 | 0.7090 | 0.709 | | 0.5075 | 39.25 | 8400 | 0.5607 | 0.7011 | 0.701 | | 0.5055 | 40.19 | 8600 | 0.5566 | 0.7061 | 0.706 | | 0.5047 | 41.12 | 8800 | 0.5585 | 0.7090 | 0.709 | | 0.5065 | 42.06 | 9000 | 0.5600 | 0.7080 | 0.708 | | 0.5049 | 42.99 | 9200 | 0.5601 | 0.7021 | 0.702 | | 0.5049 | 43.93 | 9400 | 0.5579 | 0.7071 | 0.707 | | 0.5032 | 44.86 | 9600 | 0.5576 | 0.7081 | 0.708 | | 0.5063 | 45.79 | 9800 | 0.5600 | 0.7071 | 0.707 | | 0.5001 | 46.73 | 10000 | 0.5618 | 0.7061 | 0.706 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
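As the `peft` tags in the record below indicate, this repository stores a PEFT adapter rather than full model weights, so inference presumably loads the base checkpoint first and attaches the adapter on top. A hedged sketch — whether the seqsight base model needs `trust_remote_code`, and which task head applies, are assumptions not stated in the card:

```python
from peft import PeftModel
from transformers import AutoModel

# Base checkpoint named in the card; trust_remote_code is an assumption in
# case the seqsight architecture ships custom modeling code.
base = AutoModel.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M", trust_remote_code=True
)
# Adapter repo for this record; PEFT layers its weights over the frozen base.
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_3-seqsight_8192_512_30M-L8_f"
)
model.eval()
```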
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_3-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:16:30+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_3-seqsight\_8192\_512\_30M-L8\_f ========================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5216 * F1 Score: 0.7278 * Accuracy: 0.729 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
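The card's "How to Get Started" section is left empty. For a llama text-generation checkpoint such as this one (the repo id comes from this record's metadata; the prompt, tokenizer availability, and generation settings are assumptions), a generic sketch would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "swj0419/bbc_retrain_new_STEP0000050"  # id from this record's metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The BBC reported that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```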
{"library_name": "transformers", "tags": []}
swj0419/bbc_retrain_new_STEP0000050
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:17:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.4464 - F1 Score: 0.7959 - Accuracy: 0.796 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5932 | 1.34 | 200 | 0.5470 | 0.7153 | 0.718 | | 0.5448 | 2.68 | 400 | 0.5315 | 0.7370 | 0.737 | | 0.5311 | 4.03 | 600 | 0.5281 | 0.7360 | 0.736 | | 0.5254 | 5.37 | 800 | 0.5206 | 0.7400 | 0.74 | | 0.522 | 6.71 | 1000 | 0.5165 | 0.7498 | 0.75 | | 0.516 | 8.05 | 1200 | 0.5191 | 0.7451 | 0.746 | | 0.5096 | 9.4 | 1400 | 0.5097 | 0.7499 | 0.75 | | 0.5084 | 10.74 | 1600 | 0.5063 | 0.7479 | 0.748 | | 0.5065 | 12.08 | 1800 | 0.5223 | 0.7481 | 0.749 | | 0.5047 | 13.42 | 2000 | 0.5103 | 0.7469 | 0.747 | | 0.5033 | 14.77 | 2200 | 0.5049 | 0.7520 | 0.753 | | 0.4965 | 16.11 | 2400 | 0.5122 | 0.7526 | 0.753 | | 0.4974 | 17.45 | 2600 | 0.5050 | 0.7537 | 0.754 | | 0.4947 | 18.79 | 2800 | 0.5027 | 0.7478 | 0.748 | | 0.4909 | 20.13 | 3000 | 0.5053 | 0.7460 | 0.746 | | 0.4918 | 21.48 | 3200 | 0.5123 | 0.7519 | 0.752 | | 0.4903 | 22.82 | 3400 | 0.5071 | 0.7530 | 0.753 | | 0.4871 | 24.16 | 3600 | 0.5038 | 0.7456 | 0.746 | | 0.4821 | 25.5 | 3800 | 0.5072 | 0.7488 | 0.749 | | 0.4891 | 26.85 | 4000 | 0.5063 | 0.7511 | 0.752 | | 0.4854 | 28.19 | 4200 | 0.5053 | 0.7549 | 0.755 | | 0.4827 | 29.53 | 4400 | 0.5108 | 0.7490 | 0.749 | | 0.4823 | 30.87 | 4600 | 0.5077 | 0.7530 | 0.753 | | 0.4827 | 32.21 | 4800 | 0.5076 | 0.7487 | 0.749 | | 0.4797 | 33.56 | 5000 | 0.5107 | 0.7558 | 0.756 | | 0.4823 | 34.9 | 5200 | 0.5074 | 0.7550 | 0.755 | | 0.4765 | 36.24 | 5400 | 0.5067 | 0.7527 | 0.753 | | 0.481 | 37.58 | 5600 | 0.5042 | 0.7580 | 0.758 | | 0.4767 | 38.93 | 5800 | 0.5042 | 0.7559 | 0.756 | | 0.4756 | 40.27 | 6000 | 0.5029 | 0.7576 | 0.758 | | 0.4763 | 41.61 | 6200 | 0.5035 | 0.7539 | 0.754 | | 0.4761 | 42.95 | 6400 | 0.5079 | 0.7570 | 0.757 | | 0.4737 | 44.3 | 6600 | 0.5080 | 0.7550 | 0.755 | | 0.4767 | 45.64 | 6800 | 0.5121 | 0.7598 | 0.76 | | 0.4739 | 46.98 | 7000 | 0.5067 | 0.7610 | 0.761 | | 0.474 | 48.32 | 7200 | 0.5092 | 0.7600 | 0.76 | | 0.4711 | 49.66 | 7400 | 0.5061 | 0.7610 | 0.761 | | 0.4719 | 51.01 | 7600 | 0.5073 | 0.7640 | 0.764 | | 0.4718 | 52.35 | 7800 | 0.5048 | 0.7528 | 0.753 | | 0.4708 | 53.69 | 8000 | 0.5038 | 0.7548 | 0.755 | | 0.4705 | 55.03 | 8200 | 0.5063 | 0.7610 | 0.761 | | 0.472 | 56.38 | 8400 | 0.5058 | 0.76 | 0.76 | | 0.4726 | 57.72 | 8600 | 0.5047 | 0.7549 | 0.755 | | 0.4675 | 59.06 | 8800 | 0.5055 | 0.7560 | 0.756 | | 0.4698 | 60.4 | 9000 | 0.5074 | 0.7620 | 0.762 | | 0.469 | 61.74 | 9200 | 0.5046 | 0.7580 | 0.758 | | 0.4726 | 63.09 | 9400 | 0.5054 | 0.7600 | 0.76 | | 0.4676 | 64.43 | 9600 | 0.5057 | 0.7560 | 0.756 | | 0.4726 | 65.77 | 9800 | 0.5063 | 0.7610 | 0.761 | | 0.4663 | 67.11 | 10000 | 0.5057 | 0.7570 | 0.757 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
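The F1 Score tracking Accuracy so closely in these tables suggests an averaged (weighted or micro) F1 over the two classes. How such metrics are computed in principle — the labels are dummies and the averaging mode is an assumption, since the cards do not specify it:

```python
from sklearn.metrics import accuracy_score, f1_score

labels = [0, 1, 1, 0, 1, 0]  # dummy ground truth
preds = [0, 1, 0, 0, 1, 1]   # dummy model predictions
print(accuracy_score(labels, preds))                # fraction of exact matches
print(f1_score(labels, preds, average="weighted"))  # per-class F1, support-weighted
```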
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_2-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:17:13+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_2-seqsight\_8192\_512\_30M-L1\_f ========================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.4464 * F1 Score: 0.7959 * Accuracy: 0.796 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold1 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window16-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window16-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0423 - Accuracy: 0.6478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4592 | 1.0 | 924 | 1.4398 | 0.5180 | | 1.2561 | 2.0 | 1848 | 1.1998 | 0.5970 | | 1.555 | 3.0 | 2772 | 1.1434 | 0.6079 | | 1.1153 | 4.0 | 3696 | 1.0796 | 0.6366 | | 1.0327 | 5.0 | 4620 | 1.0669 | 0.6426 | | 0.8742 | 6.0 | 5544 | 1.0460 | 0.6453 | | 0.7982 | 7.0 | 6468 | 1.0642 | 0.6393 | | 0.8689 | 8.0 | 7392 | 1.0720 | 0.6396 | | 0.7857 | 9.0 | 8316 | 1.0542 | 0.6445 | | 0.7277 | 10.0 | 9240 | 1.0423 | 0.6478 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
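A minimal usage sketch for the fine-tuned checkpoint described above; the repo id is taken from this record's metadata, and the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold1",
)
print(classifier("example.jpg"))  # list of {"label", "score"} dicts, best first
```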
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swinv2-tiny-patch4-window16-256", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold1", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6477611940298508, "name": "Accuracy"}]}]}]}
onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold1
null
[ "transformers", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-tiny-patch4-window16-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:17:16+00:00
[]
[]
TAGS #transformers #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-tiny-patch4-window16-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Boya1\_RMSProp\_1-e5\_10Epoch\_swinv2-tiny-patch4-window16-256\_fold1 ===================================================================== This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window16-256 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 1.0423 * Accuracy: 0.6478 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.35.0 * Pytorch 2.1.0 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#transformers #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-tiny-patch4-window16-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cmpktheo/gemma-2b-ft-G2E
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:18:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zandfj/LLaMA2-7B-Chat_sft_moren_dpo_z_moren_042713
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:20:03+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# NDD-ppma_test-content_tags

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3296
- Accuracy: 0.7930
- F1: 0.8297
- Precision: 0.9284
- Recall: 0.7930

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.162 | 0.9990 | 722 | 0.3935 | 0.7930 | 0.8297 | 0.9284 | 0.7930 |
| 0.1176 | 1.9979 | 1444 | 0.3296 | 0.7930 | 0.8297 | 0.9284 | 0.7930 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "NDD-ppma_test-content_tags", "results": []}]}
lgk03/NDD-ppma_test-content_tags
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:21:10+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
NDD-ppma\_test-content\_tags
============================

This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:

* Loss: 0.3296
* Accuracy: 0.7930
* F1: 0.8297
* Precision: 0.9284
* Recall: 0.7930

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2

### Training results

### Framework versions

* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_2-seqsight_8192_512_30M-L8_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4597
- F1 Score: 0.7837
- Accuracy: 0.784

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5749 | 1.34 | 200 | 0.5307 | 0.7430 | 0.744 |
| 0.5292 | 2.68 | 400 | 0.5246 | 0.7435 | 0.744 |
| 0.5144 | 4.03 | 600 | 0.5179 | 0.7526 | 0.753 |
| 0.5057 | 5.37 | 800 | 0.5118 | 0.7549 | 0.755 |
| 0.5003 | 6.71 | 1000 | 0.5201 | 0.7573 | 0.758 |
| 0.494 | 8.05 | 1200 | 0.5002 | 0.7520 | 0.752 |
| 0.4871 | 9.4 | 1400 | 0.5042 | 0.7510 | 0.751 |
| 0.4837 | 10.74 | 1600 | 0.5001 | 0.7520 | 0.752 |
| 0.4818 | 12.08 | 1800 | 0.5124 | 0.7566 | 0.757 |
| 0.4764 | 13.42 | 2000 | 0.5040 | 0.7559 | 0.756 |
| 0.476 | 14.77 | 2200 | 0.5011 | 0.7326 | 0.734 |
| 0.4666 | 16.11 | 2400 | 0.5087 | 0.7499 | 0.75 |
| 0.4677 | 17.45 | 2600 | 0.4994 | 0.7336 | 0.734 |
| 0.4619 | 18.79 | 2800 | 0.4987 | 0.7365 | 0.737 |
| 0.4563 | 20.13 | 3000 | 0.5070 | 0.7400 | 0.74 |
| 0.4577 | 21.48 | 3200 | 0.5136 | 0.7447 | 0.745 |
| 0.4532 | 22.82 | 3400 | 0.5117 | 0.7410 | 0.741 |
| 0.4501 | 24.16 | 3600 | 0.5011 | 0.7350 | 0.735 |
| 0.443 | 25.5 | 3800 | 0.5074 | 0.7470 | 0.747 |
| 0.4472 | 26.85 | 4000 | 0.4981 | 0.7440 | 0.744 |
| 0.4446 | 28.19 | 4200 | 0.5097 | 0.7429 | 0.743 |
| 0.4392 | 29.53 | 4400 | 0.5106 | 0.7368 | 0.737 |
| 0.4349 | 30.87 | 4600 | 0.5061 | 0.7360 | 0.736 |
| 0.4352 | 32.21 | 4800 | 0.5051 | 0.7360 | 0.736 |
| 0.4317 | 33.56 | 5000 | 0.5136 | 0.7449 | 0.745 |
| 0.4318 | 34.9 | 5200 | 0.5131 | 0.7470 | 0.747 |
| 0.4255 | 36.24 | 5400 | 0.5059 | 0.7418 | 0.742 |
| 0.428 | 37.58 | 5600 | 0.5116 | 0.7419 | 0.742 |
| 0.4222 | 38.93 | 5800 | 0.5093 | 0.7369 | 0.737 |
| 0.4214 | 40.27 | 6000 | 0.5080 | 0.7368 | 0.737 |
| 0.4193 | 41.61 | 6200 | 0.5054 | 0.7447 | 0.745 |
| 0.4191 | 42.95 | 6400 | 0.5093 | 0.7500 | 0.75 |
| 0.4156 | 44.3 | 6600 | 0.5104 | 0.7439 | 0.744 |
| 0.4172 | 45.64 | 6800 | 0.5234 | 0.7535 | 0.754 |
| 0.4129 | 46.98 | 7000 | 0.5135 | 0.7529 | 0.753 |
| 0.4132 | 48.32 | 7200 | 0.5147 | 0.7530 | 0.753 |
| 0.4106 | 49.66 | 7400 | 0.5118 | 0.7560 | 0.756 |
| 0.4103 | 51.01 | 7600 | 0.5158 | 0.7510 | 0.751 |
| 0.4057 | 52.35 | 7800 | 0.5146 | 0.7448 | 0.745 |
| 0.4064 | 53.69 | 8000 | 0.5110 | 0.7459 | 0.746 |
| 0.4078 | 55.03 | 8200 | 0.5133 | 0.7470 | 0.747 |
| 0.4054 | 56.38 | 8400 | 0.5162 | 0.7530 | 0.753 |
| 0.4048 | 57.72 | 8600 | 0.5126 | 0.7480 | 0.748 |
| 0.4 | 59.06 | 8800 | 0.5131 | 0.7500 | 0.75 |
| 0.4016 | 60.4 | 9000 | 0.5184 | 0.7490 | 0.749 |
| 0.4004 | 61.74 | 9200 | 0.5147 | 0.7470 | 0.747 |
| 0.4038 | 63.09 | 9400 | 0.5179 | 0.7490 | 0.749 |
| 0.3989 | 64.43 | 9600 | 0.5157 | 0.7470 | 0.747 |
| 0.4009 | 65.77 | 9800 | 0.5170 | 0.7500 | 0.75 |
| 0.3977 | 67.11 | 10000 | 0.5158 | 0.7500 | 0.75 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_2-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:22:16+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_2-seqsight\_8192\_512\_30M-L8\_f
=========================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:

* Loss: 0.4597
* F1 Score: 0.7837
* Accuracy: 0.784

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adapterhappy2sad-study-50-0.009
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:23:30+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_2-seqsight_8192_512_30M-L32_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4885
- F1 Score: 0.7820
- Accuracy: 0.782

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5649 | 1.34 | 200 | 0.5222 | 0.7425 | 0.744 |
| 0.5198 | 2.68 | 400 | 0.5167 | 0.7509 | 0.752 |
| 0.5037 | 4.03 | 600 | 0.5180 | 0.7536 | 0.755 |
| 0.4921 | 5.37 | 800 | 0.5142 | 0.7540 | 0.755 |
| 0.4849 | 6.71 | 1000 | 0.5136 | 0.7554 | 0.756 |
| 0.4747 | 8.05 | 1200 | 0.4985 | 0.7513 | 0.752 |
| 0.4639 | 9.4 | 1400 | 0.5182 | 0.7415 | 0.742 |
| 0.4579 | 10.74 | 1600 | 0.5201 | 0.7453 | 0.746 |
| 0.4514 | 12.08 | 1800 | 0.5115 | 0.7490 | 0.749 |
| 0.4421 | 13.42 | 2000 | 0.5250 | 0.7394 | 0.74 |
| 0.4354 | 14.77 | 2200 | 0.5126 | 0.7477 | 0.748 |
| 0.4234 | 16.11 | 2400 | 0.5338 | 0.7438 | 0.744 |
| 0.4153 | 17.45 | 2600 | 0.5287 | 0.7510 | 0.751 |
| 0.4061 | 18.79 | 2800 | 0.5258 | 0.7430 | 0.743 |
| 0.3981 | 20.13 | 3000 | 0.5438 | 0.7620 | 0.762 |
| 0.3902 | 21.48 | 3200 | 0.5514 | 0.7394 | 0.74 |
| 0.383 | 22.82 | 3400 | 0.5512 | 0.7478 | 0.748 |
| 0.3701 | 24.16 | 3600 | 0.5570 | 0.7279 | 0.728 |
| 0.3634 | 25.5 | 3800 | 0.5536 | 0.7439 | 0.744 |
| 0.3577 | 26.85 | 4000 | 0.5462 | 0.7460 | 0.746 |
| 0.3516 | 28.19 | 4200 | 0.5881 | 0.7377 | 0.738 |
| 0.3421 | 29.53 | 4400 | 0.6056 | 0.7303 | 0.731 |
| 0.3324 | 30.87 | 4600 | 0.5947 | 0.7438 | 0.744 |
| 0.3313 | 32.21 | 4800 | 0.5837 | 0.7400 | 0.74 |
| 0.3203 | 33.56 | 5000 | 0.6170 | 0.7379 | 0.738 |
| 0.3184 | 34.9 | 5200 | 0.6058 | 0.7290 | 0.729 |
| 0.3133 | 36.24 | 5400 | 0.5874 | 0.7400 | 0.74 |
| 0.3059 | 37.58 | 5600 | 0.6140 | 0.7398 | 0.74 |
| 0.3015 | 38.93 | 5800 | 0.6045 | 0.7309 | 0.731 |
| 0.296 | 40.27 | 6000 | 0.6256 | 0.7308 | 0.731 |
| 0.293 | 41.61 | 6200 | 0.6169 | 0.7249 | 0.725 |
| 0.2827 | 42.95 | 6400 | 0.6515 | 0.7380 | 0.738 |
| 0.2781 | 44.3 | 6600 | 0.6570 | 0.7299 | 0.73 |
| 0.2796 | 45.64 | 6800 | 0.6887 | 0.7287 | 0.729 |
| 0.2751 | 46.98 | 7000 | 0.6530 | 0.7289 | 0.729 |
| 0.2708 | 48.32 | 7200 | 0.6750 | 0.7290 | 0.729 |
| 0.2673 | 49.66 | 7400 | 0.6700 | 0.7288 | 0.729 |
| 0.2631 | 51.01 | 7600 | 0.6750 | 0.73 | 0.73 |
| 0.2541 | 52.35 | 7800 | 0.6998 | 0.7340 | 0.734 |
| 0.2572 | 53.69 | 8000 | 0.6742 | 0.7370 | 0.737 |
| 0.2539 | 55.03 | 8200 | 0.6811 | 0.7390 | 0.739 |
| 0.251 | 56.38 | 8400 | 0.6732 | 0.7369 | 0.737 |
| 0.2468 | 57.72 | 8600 | 0.7015 | 0.7320 | 0.732 |
| 0.2459 | 59.06 | 8800 | 0.6816 | 0.7340 | 0.734 |
| 0.245 | 60.4 | 9000 | 0.7022 | 0.7339 | 0.734 |
| 0.2397 | 61.74 | 9200 | 0.7028 | 0.7289 | 0.729 |
| 0.2396 | 63.09 | 9400 | 0.7151 | 0.7298 | 0.73 |
| 0.2366 | 64.43 | 9600 | 0.7071 | 0.7330 | 0.733 |
| 0.2438 | 65.77 | 9800 | 0.7062 | 0.7309 | 0.731 |
| 0.2363 | 67.11 | 10000 | 0.7061 | 0.7319 | 0.732 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_2-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:24:42+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_tf\_2-seqsight\_8192\_512\_30M-L32\_f
==========================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:

* Loss: 0.4885
* F1 Score: 0.7820
* Accuracy: 0.782

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Qwen1.5-110B-Chat-AWQ

## Introduction

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:

* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need of `trust_remote_code`.

For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>

## Model Details

Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we temporarily did not include GQA (except for 32B and 110B) and the mixture of SWA and full attention.

## Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements

The code of Qwen1.5 is in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```

## Quickstart

Here we provide a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-110B-Chat-AWQ",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-110B-Chat-AWQ")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## Tips

* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.

## Citation

If you find our work helpful, feel free to give us a cite.

```
@article{qwen,
  title={Qwen Technical Report},
  author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
  journal={arXiv preprint arXiv:2309.16609},
  year={2023}
}
```
{"language": ["en"], "license": "other", "tags": ["chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/Qwen1.5-110B-Chat-AWQ/blob/main/LICENSE", "pipeline_tag": "text-generation"}
Qwen/Qwen1.5-110B-Chat-AWQ
null
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T06:25:13+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #qwen2 #text-generation #chat #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Qwen1.5-110B-Chat-AWQ

## Introduction

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:

* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of 'trust_remote_code'.

For more details, please refer to our blog post and GitHub repo.
<br>

## Model Details

Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B and 110B) and the mixture of SWA and full attention.

## Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements

The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:

## Quickstart

Here provides a code snippet with 'apply_chat_template' to show you how to load the tokenizer and model and how to generate contents.

## Tips

* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation_config.json'.

If you find our work helpful, feel free to give us a cite.
[ "# Qwen1.5-110B-Chat-AWQ", "## Introduction\n\nQwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: \n\n* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;\n* Significant performance improvement in human preference for chat models;\n* Multilingual support of both base and chat models;\n* Stable support of 32K context length for models of all sizes\n* No need of 'trust_remote_code'.\n\nFor more details, please refer to our blog post and GitHub repo.\n<br>", "## Model Details\nQwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B and 110B) and the mixture of SWA and full attention.", "## Training details\nWe pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.", "## Requirements\nThe code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:", "## Quickstart\n\nHere provides a code snippet with 'apply_chat_template' to show you how to load the tokenizer and model and how to generate contents.", "## Tips\n\n* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation_config.json'.\n\n\nIf you find our work helpful, feel free to give us a cite." ]
[ "TAGS\n#transformers #safetensors #qwen2 #text-generation #chat #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Qwen1.5-110B-Chat-AWQ", "## Introduction\n\nQwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: \n\n* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;\n* Significant performance improvement in human preference for chat models;\n* Multilingual support of both base and chat models;\n* Stable support of 32K context length for models of all sizes\n* No need of 'trust_remote_code'.\n\nFor more details, please refer to our blog post and GitHub repo.\n<br>", "## Model Details\nQwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B and 110B) and the mixture of SWA and full attention.", "## Training details\nWe pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.", "## Requirements\nThe code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:", "## Quickstart\n\nHere provides a code snippet with 'apply_chat_template' to show you how to load the tokenizer and model and how to generate contents.", "## Tips\n\n* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation_config.json'.\n\n\nIf you find our work helpful, feel free to give us a cite." ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
swj0419/bbc_retrain_new_STEP0000100
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:25:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_8192_512_30M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.2289 - F1 Score: 0.5468 - Accuracy: 0.5496 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.1855 | 0.35 | 200 | 2.1850 | 0.0777 | 0.1281 | | 2.1796 | 0.7 | 400 | 2.1773 | 0.0786 | 0.1305 | | 2.1694 | 1.05 | 600 | 2.1674 | 0.1004 | 0.1358 | | 2.1535 | 1.4 | 800 | 2.1291 | 0.1234 | 0.1776 | | 2.1097 | 1.75 | 1000 | 2.0424 | 0.1789 | 0.2265 | | 2.031 | 2.09 | 1200 | 1.9501 | 0.2330 | 0.2619 | | 1.9484 | 2.44 | 1400 | 1.8227 | 0.2932 | 0.3053 | | 1.876 | 2.79 | 1600 | 1.7586 | 0.3244 | 0.3364 | | 1.8259 | 3.14 | 1800 | 1.7019 | 0.3421 | 0.3559 | | 1.7812 | 3.49 | 2000 | 1.6691 | 0.3666 | 0.3740 | | 1.754 | 3.84 | 2200 | 1.6100 | 0.3945 | 0.4045 | | 1.7057 | 4.19 | 2400 | 1.5670 | 0.4220 | 0.4194 | | 1.6663 | 4.54 | 2600 | 1.5210 | 0.4305 | 0.4334 | | 1.6469 | 4.89 | 2800 | 1.5190 | 0.4305 | 0.4318 | | 1.6263 | 5.24 | 3000 | 1.4904 | 0.4349 | 0.4422 | | 1.6046 | 5.58 | 3200 | 1.4649 | 0.4517 | 0.4554 | | 1.5793 | 5.93 | 3400 | 1.4500 | 0.4442 | 0.4518 | | 1.5689 | 6.28 | 3600 | 1.4389 | 0.4618 | 0.4596 | | 1.5559 | 6.63 | 3800 | 1.4115 | 0.4620 | 0.4696 | | 1.5339 | 6.98 | 4000 | 1.3988 | 0.4715 | 0.4851 | | 1.5257 | 7.33 | 4200 | 1.3822 | 0.4841 | 0.4923 | | 1.5065 | 7.68 | 4400 | 1.3691 | 0.4873 | 0.4920 | | 1.4975 | 8.03 | 4600 | 1.3517 | 0.4955 | 0.5023 | | 1.4805 | 8.38 | 4800 | 1.3445 | 0.4912 | 0.4993 | | 1.4796 | 8.73 | 5000 | 1.3267 | 0.5133 | 0.5179 | | 1.4511 | 9.08 | 5200 | 1.3267 | 0.5066 | 0.5062 | | 1.4485 | 9.42 | 5400 | 1.3009 | 0.5179 | 0.5251 | | 1.4423 | 9.77 | 5600 | 1.2948 | 0.5202 | 0.5275 | | 1.4405 | 10.12 | 5800 | 1.2897 | 0.5204 | 0.5236 | | 1.4335 | 10.47 | 6000 | 1.2751 | 0.5303 | 0.5329 | | 1.4257 | 10.82 | 6200 | 1.2725 | 0.5306 | 0.5333 | | 1.3988 | 11.17 | 6400 | 1.2673 | 0.5330 | 0.5350 | | 1.4113 | 11.52 | 6600 | 1.2662 | 0.5356 | 0.5357 | | 1.4073 | 11.87 | 6800 | 1.2548 | 0.5383 | 0.5384 | | 1.4015 | 12.22 | 7000 | 1.2573 | 0.5343 | 0.5373 | | 1.3847 | 12.57 | 7200 | 1.2444 | 0.5417 | 0.5445 | | 1.3905 | 12.91 | 7400 | 1.2465 | 0.5384 | 0.5398 | | 1.3904 | 13.26 | 7600 | 1.2347 | 0.5432 | 0.5434 | | 1.3764 | 13.61 | 7800 | 1.2385 | 0.5463 | 0.5444 | | 1.3763 | 13.96 | 8000 | 1.2293 | 0.5449 | 0.5466 | | 1.3708 | 14.31 | 8200 | 1.2276 | 0.5451 | 0.5481 | | 1.3686 | 14.66 | 8400 | 1.2254 | 0.5482 | 0.5480 | | 1.3699 | 15.01 | 8600 | 1.2273 | 0.5449 | 0.5508 | | 1.3725 | 15.36 | 8800 | 1.2182 | 0.5528 | 0.5539 | | 1.3484 | 15.71 | 9000 | 
1.2193 | 0.5482 | 0.5516 | | 1.3594 | 16.06 | 9200 | 1.2163 | 0.5486 | 0.5514 | | 1.3608 | 16.4 | 9400 | 1.2147 | 0.5478 | 0.5516 | | 1.3575 | 16.75 | 9600 | 1.2145 | 0.5505 | 0.5527 | | 1.3553 | 17.1 | 9800 | 1.2140 | 0.5500 | 0.5525 | | 1.358 | 17.45 | 10000 | 1.2140 | 0.5519 | 0.5549 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
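The card above names the base checkpoint and the adapter repository but includes no usage snippet. A minimal inference sketch, assuming the base model loads through the standard `transformers` auto-classes with a sequence-classification head (the 9-label count for the GUE COVID variant task is likewise an assumption, not something the card states):

```python
# Hedged sketch: load the fine-tuned PEFT adapter on top of its base checkpoint.
# base_id / adapter_id come from the card; num_labels=9 and auto-class
# compatibility are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_virus_covid-seqsight_8192_512_30M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=9)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights
model.eval()

seq = "ATGGCGTACGTTAGC"  # toy nucleotide string, not real evaluation data
with torch.no_grad():
    logits = model(**tokenizer(seq, return_tensors="pt")).logits
print(logits.argmax(dim=-1))  # predicted class index
```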
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_virus_covid-seqsight_8192_512_30M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_8192_512_30M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:26:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_virus\_covid-seqsight\_8192\_512\_30M-L8\_f ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset. It achieves the following results on the evaluation set: * Loss: 1.2289 * F1 Score: 0.5468 * Accuracy: 0.5496 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_8192_512_30M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.6492 - F1 Score: 0.3916 - Accuracy: 0.3854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.1856 | 0.35 | 200 | 2.1860 | 0.0460 | 0.1191 | | 2.1822 | 0.7 | 400 | 2.1840 | 0.0599 | 0.1231 | | 2.1777 | 1.05 | 600 | 2.1814 | 0.0780 | 0.1308 | | 2.1743 | 1.4 | 800 | 2.1715 | 0.0743 | 0.1401 | | 2.167 | 1.75 | 1000 | 2.1623 | 0.0936 | 0.1480 | | 2.1605 | 2.09 | 1200 | 2.1612 | 0.1058 | 0.1560 | | 2.1515 | 2.44 | 1400 | 2.1490 | 0.1463 | 0.1608 | | 2.1381 | 2.79 | 1600 | 2.1187 | 0.1520 | 0.1872 | | 2.1201 | 3.14 | 1800 | 2.1000 | 0.1576 | 0.2019 | | 2.1026 | 3.49 | 2000 | 2.0667 | 0.1722 | 0.2203 | | 2.0866 | 3.84 | 2200 | 2.0280 | 0.2038 | 0.2406 | | 2.0574 | 4.19 | 2400 | 1.9980 | 0.2195 | 0.2421 | | 2.0332 | 4.54 | 2600 | 1.9691 | 0.2224 | 0.2566 | | 2.0145 | 4.89 | 2800 | 1.9464 | 0.2542 | 0.2722 | | 1.9963 | 5.24 | 3000 | 1.9197 | 0.2488 | 0.2815 | | 1.9832 | 5.58 | 3200 | 1.8955 | 0.2638 | 0.2917 | | 1.9536 | 5.93 | 3400 | 1.8678 | 0.2993 | 0.3152 | | 1.9413 | 6.28 | 3600 | 1.8402 | 0.3140 | 0.3217 | | 1.9241 | 6.63 | 3800 | 1.8249 | 0.3058 | 0.3198 | | 1.9091 | 6.98 | 4000 | 1.7995 | 0.3194 | 0.3322 | | 1.897 | 7.33 | 4200 | 1.7836 | 0.3233 | 0.3352 | | 1.8756 | 7.68 | 4400 | 1.7592 | 0.3454 | 0.3498 | | 1.8677 | 8.03 | 4600 | 1.7630 | 0.3215 | 0.3314 | | 1.856 | 8.38 | 4800 | 1.7384 | 0.3302 | 0.3465 | | 1.8508 | 8.73 | 5000 | 1.7255 | 0.3445 | 0.3526 | | 1.8347 | 9.08 | 5200 | 1.7255 | 0.3522 | 0.3518 | | 1.8283 | 9.42 | 5400 | 1.7108 | 0.3478 | 0.3608 | | 1.8247 | 9.77 | 5600 | 1.7034 | 0.3530 | 0.3613 | | 1.8133 | 10.12 | 5800 | 1.6961 | 0.3608 | 0.3680 | | 1.8155 | 10.47 | 6000 | 1.6899 | 0.3659 | 0.3654 | | 1.8112 | 10.82 | 6200 | 1.6830 | 0.3615 | 0.3646 | | 1.7961 | 11.17 | 6400 | 1.6881 | 0.3563 | 0.3582 | | 1.7989 | 11.52 | 6600 | 1.6829 | 0.3712 | 0.3691 | | 1.7956 | 11.87 | 6800 | 1.6736 | 0.3713 | 0.3728 | | 1.7853 | 12.22 | 7000 | 1.6661 | 0.3705 | 0.3707 | | 1.7802 | 12.57 | 7200 | 1.6657 | 0.3784 | 0.3768 | | 1.7843 | 12.91 | 7400 | 1.6640 | 0.3764 | 0.3782 | | 1.7861 | 13.26 | 7600 | 1.6617 | 0.3813 | 0.3799 | | 1.7732 | 13.61 | 7800 | 1.6594 | 0.3840 | 0.3787 | | 1.7761 | 13.96 | 8000 | 1.6559 | 0.3790 | 0.3755 | | 1.7699 | 14.31 | 8200 | 1.6545 | 0.3815 | 0.3833 | | 1.7722 | 14.66 | 8400 | 1.6481 | 0.3865 | 0.3846 | | 1.7709 | 15.01 | 8600 | 1.6509 | 0.3806 | 0.3818 | | 1.7755 | 15.36 | 8800 | 1.6469 | 0.3876 | 0.3833 | | 1.7549 | 15.71 | 9000 | 
1.6479 | 0.3843 | 0.3838 | | 1.7576 | 16.06 | 9200 | 1.6445 | 0.3873 | 0.3848 | | 1.7721 | 16.4 | 9400 | 1.6436 | 0.3875 | 0.3871 | | 1.7559 | 16.75 | 9600 | 1.6441 | 0.3861 | 0.3840 | | 1.7599 | 17.1 | 9800 | 1.6441 | 0.3872 | 0.3864 | | 1.765 | 17.45 | 10000 | 1.6439 | 0.3874 | 0.3864 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
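For readers who want to reproduce the configuration listed above, one plausible mapping of the hyperparameter block onto `transformers.TrainingArguments` (matching the Transformers 4.38 API the card records) is sketched below; the output directory is illustrative, and the 200-step evaluation interval is inferred from the results table rather than stated outright:

```python
# Only the values printed in the card come from the source; output_dir and the
# logging cadence are illustrative.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_virus_covid-seqsight_8192_512_30M-L1_f",  # illustrative
    learning_rate=5e-4,                  # 0.0005, as listed
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,                      # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,                    # training_steps: 10000
    evaluation_strategy="steps",         # the table evaluates every 200 steps
    eval_steps=200,
    logging_steps=200,
)
```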
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_virus_covid-seqsight_8192_512_30M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_8192_512_30M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:26:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_virus\_covid-seqsight\_8192\_512\_30M-L1\_f ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset. It achieves the following results on the evaluation set: * Loss: 1.6492 * F1 Score: 0.3916 * Accuracy: 0.3854 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_16384_512_22M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4922 - F1 Score: 0.8009 - Accuracy: 0.8010 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6111 | 5.13 | 200 | 0.5537 | 0.7272 | 0.7325 | | 0.5084 | 10.26 | 400 | 0.5295 | 0.7550 | 0.7553 | | 0.4743 | 15.38 | 600 | 0.5202 | 0.7415 | 0.7423 | | 0.4584 | 20.51 | 800 | 0.4863 | 0.7668 | 0.7667 | | 0.4425 | 25.64 | 1000 | 0.4875 | 0.7684 | 0.7684 | | 0.4327 | 30.77 | 1200 | 0.4837 | 0.7747 | 0.7749 | | 0.4214 | 35.9 | 1400 | 0.4651 | 0.7929 | 0.7928 | | 0.4151 | 41.03 | 1600 | 0.4625 | 0.7978 | 0.7977 | | 0.4071 | 46.15 | 1800 | 0.4724 | 0.7859 | 0.7863 | | 0.4005 | 51.28 | 2000 | 0.4603 | 0.7879 | 0.7879 | | 0.3943 | 56.41 | 2200 | 0.4533 | 0.7946 | 0.7945 | | 0.3896 | 61.54 | 2400 | 0.4716 | 0.7875 | 0.7879 | | 0.3821 | 66.67 | 2600 | 0.4654 | 0.8025 | 0.8026 | | 0.3798 | 71.79 | 2800 | 0.4583 | 0.8043 | 0.8042 | | 0.3754 | 76.92 | 3000 | 0.4688 | 0.8103 | 0.8108 | | 0.3718 | 82.05 | 3200 | 0.4531 | 0.8092 | 0.8091 | | 0.3685 | 87.18 | 3400 | 0.4774 | 0.8036 | 0.8042 | | 0.366 | 92.31 | 3600 | 0.4550 | 0.8124 | 0.8124 | | 0.3609 | 97.44 | 3800 | 0.4492 | 0.8173 | 0.8173 | | 0.3546 | 102.56 | 4000 | 0.4583 | 0.8174 | 0.8173 | | 0.3538 | 107.69 | 4200 | 0.4712 | 0.8105 | 0.8108 | | 0.3495 | 112.82 | 4400 | 0.4596 | 0.8223 | 0.8222 | | 0.3476 | 117.95 | 4600 | 0.4492 | 0.8223 | 0.8222 | | 0.3417 | 123.08 | 4800 | 0.4569 | 0.8174 | 0.8173 | | 0.343 | 128.21 | 5000 | 0.4498 | 0.8207 | 0.8206 | | 0.3413 | 133.33 | 5200 | 0.4471 | 0.8223 | 0.8222 | | 0.3361 | 138.46 | 5400 | 0.4447 | 0.8239 | 0.8238 | | 0.3351 | 143.59 | 5600 | 0.4510 | 0.8239 | 0.8238 | | 0.331 | 148.72 | 5800 | 0.4490 | 0.8223 | 0.8222 | | 0.3257 | 153.85 | 6000 | 0.4513 | 0.8256 | 0.8254 | | 0.3248 | 158.97 | 6200 | 0.4563 | 0.8256 | 0.8254 | | 0.3277 | 164.1 | 6400 | 0.4537 | 0.8239 | 0.8238 | | 0.3237 | 169.23 | 6600 | 0.4527 | 0.8207 | 0.8206 | | 0.3262 | 174.36 | 6800 | 0.4558 | 0.8190 | 0.8189 | | 0.3174 | 179.49 | 7000 | 0.4537 | 0.8207 | 0.8206 | | 0.3173 | 184.62 | 7200 | 0.4505 | 0.8222 | 0.8222 | | 0.3155 | 189.74 | 7400 | 0.4557 | 0.8223 | 0.8222 | | 0.3122 | 194.87 | 7600 | 0.4555 | 0.8223 | 0.8222 | | 0.3162 | 200.0 | 7800 | 0.4558 | 0.8191 | 0.8189 | | 0.3153 | 205.13 | 8000 | 0.4537 | 0.8256 | 0.8254 | | 0.3071 | 210.26 | 8200 | 0.4576 | 0.8239 | 0.8238 | | 0.3123 | 215.38 | 8400 | 0.4560 | 0.8256 | 0.8254 | | 0.3053 | 220.51 | 8600 | 0.4578 | 0.8223 | 0.8222 | | 
0.3072 | 225.64 | 8800 | 0.4606 | 0.8256 | 0.8254 | | 0.3081 | 230.77 | 9000 | 0.4583 | 0.8239 | 0.8238 | | 0.3066 | 235.9 | 9200 | 0.4589 | 0.8239 | 0.8238 | | 0.306 | 241.03 | 9400 | 0.4593 | 0.8239 | 0.8238 | | 0.3068 | 246.15 | 9600 | 0.4602 | 0.8239 | 0.8238 | | 0.306 | 251.28 | 9800 | 0.4595 | 0.8239 | 0.8238 | | 0.3071 | 256.41 | 10000 | 0.4592 | 0.8256 | 0.8254 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
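The card records PEFT 0.9.0 but never states which adapter method was used. Purely as a plausible illustration, a LoRA setup for this base checkpoint might look like the sketch below; `r`, `lora_alpha`, `lora_dropout`, `target_modules`, and the binary label count for the TATA promoter task are all assumptions, not values from the card:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_16384_512_22M",
    num_labels=2,  # promoter vs. non-promoter, assumed
)
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # hypothetical rank
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # hypothetical module names
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # sanity-check the adapter footprint
```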
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:26:41+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_prom\_prom\_300\_tata-seqsight\_16384\_512\_22M-L1\_f ========================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset. It achieves the following results on the evaluation set: * Loss: 0.4922 * F1 Score: 0.8009 * Accuracy: 0.8010 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_virus_covid-seqsight_8192_512_30M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset. It achieves the following results on the evaluation set: - Loss: 1.0241 - F1 Score: 0.6185 - Accuracy: 0.6155 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 2.1851 | 0.35 | 200 | 2.1813 | 0.0821 | 0.1275 | | 2.1775 | 0.7 | 400 | 2.1701 | 0.0978 | 0.1440 | | 2.1488 | 1.05 | 600 | 2.1058 | 0.1574 | 0.1842 | | 2.0473 | 1.4 | 800 | 1.9125 | 0.2244 | 0.2708 | | 1.8692 | 1.75 | 1000 | 1.7204 | 0.3289 | 0.3473 | | 1.7448 | 2.09 | 1200 | 1.6459 | 0.3590 | 0.3839 | | 1.6821 | 2.44 | 1400 | 1.5493 | 0.3897 | 0.4111 | | 1.6134 | 2.79 | 1600 | 1.5078 | 0.4089 | 0.4310 | | 1.5675 | 3.14 | 1800 | 1.4539 | 0.4367 | 0.4582 | | 1.5234 | 3.49 | 2000 | 1.4160 | 0.4521 | 0.4626 | | 1.4991 | 3.84 | 2200 | 1.3871 | 0.4674 | 0.4772 | | 1.4587 | 4.19 | 2400 | 1.3459 | 0.4940 | 0.4974 | | 1.4268 | 4.54 | 2600 | 1.3173 | 0.4931 | 0.5076 | | 1.4163 | 4.89 | 2800 | 1.2876 | 0.5040 | 0.5169 | | 1.3838 | 5.24 | 3000 | 1.2806 | 0.5090 | 0.5181 | | 1.3637 | 5.58 | 3200 | 1.2468 | 0.5258 | 0.5297 | | 1.3358 | 5.93 | 3400 | 1.2424 | 0.5215 | 0.5291 | | 1.3196 | 6.28 | 3600 | 1.2202 | 0.5368 | 0.5413 | | 1.3075 | 6.63 | 3800 | 1.1931 | 0.5407 | 0.5541 | | 1.2941 | 6.98 | 4000 | 1.1811 | 0.5410 | 0.5470 | | 1.2761 | 7.33 | 4200 | 1.1674 | 0.5603 | 0.5616 | | 1.263 | 7.68 | 4400 | 1.1502 | 0.5599 | 0.5655 | | 1.2595 | 8.03 | 4600 | 1.1492 | 0.5653 | 0.5681 | | 1.2293 | 8.38 | 4800 | 1.1303 | 0.5633 | 0.5715 | | 1.238 | 8.73 | 5000 | 1.1224 | 0.5725 | 0.5719 | | 1.2202 | 9.08 | 5200 | 1.1197 | 0.5782 | 0.5748 | | 1.2084 | 9.42 | 5400 | 1.1105 | 0.5813 | 0.5826 | | 1.2058 | 9.77 | 5600 | 1.0964 | 0.5816 | 0.5830 | | 1.1931 | 10.12 | 5800 | 1.0859 | 0.5912 | 0.5883 | | 1.1906 | 10.47 | 6000 | 1.0810 | 0.5909 | 0.5889 | | 1.1791 | 10.82 | 6200 | 1.0744 | 0.5976 | 0.5936 | | 1.1562 | 11.17 | 6400 | 1.0731 | 0.5945 | 0.5940 | | 1.1669 | 11.52 | 6600 | 1.0689 | 0.6019 | 0.5973 | | 1.1696 | 11.87 | 6800 | 1.0601 | 0.5996 | 0.5968 | | 1.1597 | 12.22 | 7000 | 1.0579 | 0.6047 | 0.6016 | | 1.1496 | 12.57 | 7200 | 1.0557 | 0.5999 | 0.5966 | | 1.1548 | 12.91 | 7400 | 1.0510 | 0.6041 | 0.6006 | | 1.1411 | 13.26 | 7600 | 1.0528 | 0.6037 | 0.5991 | | 1.1441 | 13.61 | 7800 | 1.0499 | 0.6110 | 0.6041 | | 1.1352 | 13.96 | 8000 | 1.0411 | 0.6079 | 0.6054 | | 1.1289 | 14.31 | 8200 | 1.0378 | 0.6108 | 0.6069 | | 1.1323 | 14.66 | 8400 | 1.0389 | 0.6059 | 0.6045 | | 1.129 | 15.01 | 8600 | 1.0371 | 0.6070 | 0.6050 | | 1.1341 | 15.36 | 8800 | 1.0289 | 0.6143 | 0.6102 | | 1.1156 | 15.71 | 9000 | 
1.0308 | 0.6106 | 0.6069 | | 1.1211 | 16.06 | 9200 | 1.0270 | 0.6124 | 0.6082 | | 1.1208 | 16.4 | 9400 | 1.0282 | 0.6119 | 0.6077 | | 1.1166 | 16.75 | 9600 | 1.0263 | 0.6132 | 0.6070 | | 1.122 | 17.1 | 9800 | 1.0263 | 0.6110 | 0.6096 | | 1.1184 | 17.45 | 10000 | 1.0254 | 0.6118 | 0.6100 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
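The results tables track two columns, F1 Score and Accuracy. One way these could be produced during evaluation is a `compute_metrics` callback like the sketch below; macro averaging is an assumption, since the card does not say which F1 average it reports:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging assumed
        "accuracy": accuracy_score(labels, preds),
    }
```

Passed to `transformers.Trainer` as `compute_metrics=compute_metrics`, this would yield the two metrics logged at every 200-step checkpoint above.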
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_virus_covid-seqsight_8192_512_30M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_virus_covid-seqsight_8192_512_30M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_8192_512_30M", "region:us" ]
null
2024-04-27T06:26:42+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
GUE\_virus\_covid-seqsight\_8192\_512\_30M-L32\_f ================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset. It achieves the following results on the evaluation set: * Loss: 1.0241 * F1 Score: 0.6185 * Accuracy: 0.6155 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_16384_512_22M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.4809 - F1 Score: 0.8026 - Accuracy: 0.8026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5603 | 5.13 | 200 | 0.5109 | 0.7689 | 0.7700 | | 0.4608 | 10.26 | 400 | 0.4759 | 0.7863 | 0.7863 | | 0.4277 | 15.38 | 600 | 0.4688 | 0.7846 | 0.7847 | | 0.4068 | 20.51 | 800 | 0.4610 | 0.7913 | 0.7912 | | 0.3844 | 25.64 | 1000 | 0.4805 | 0.7975 | 0.7977 | | 0.3659 | 30.77 | 1200 | 0.4757 | 0.8141 | 0.8140 | | 0.3487 | 35.9 | 1400 | 0.4714 | 0.8157 | 0.8157 | | 0.3338 | 41.03 | 1600 | 0.4738 | 0.8239 | 0.8238 | | 0.3212 | 46.15 | 1800 | 0.4840 | 0.8158 | 0.8157 | | 0.3048 | 51.28 | 2000 | 0.4868 | 0.8189 | 0.8189 | | 0.2977 | 56.41 | 2200 | 0.5045 | 0.8125 | 0.8124 | | 0.2848 | 61.54 | 2400 | 0.5315 | 0.8092 | 0.8091 | | 0.2743 | 66.67 | 2600 | 0.5168 | 0.8190 | 0.8189 | | 0.267 | 71.79 | 2800 | 0.5303 | 0.8109 | 0.8108 | | 0.2593 | 76.92 | 3000 | 0.5355 | 0.8125 | 0.8124 | | 0.2459 | 82.05 | 3200 | 0.5562 | 0.8090 | 0.8091 | | 0.2479 | 87.18 | 3400 | 0.5495 | 0.8010 | 0.8010 | | 0.2395 | 92.31 | 3600 | 0.5365 | 0.8060 | 0.8059 | | 0.2284 | 97.44 | 3800 | 0.5581 | 0.8025 | 0.8026 | | 0.2217 | 102.56 | 4000 | 0.6187 | 0.7810 | 0.7814 | | 0.2173 | 107.69 | 4200 | 0.6077 | 0.7894 | 0.7896 | | 0.213 | 112.82 | 4400 | 0.5782 | 0.8042 | 0.8042 | | 0.2079 | 117.95 | 4600 | 0.5814 | 0.7946 | 0.7945 | | 0.2045 | 123.08 | 4800 | 0.5928 | 0.7962 | 0.7961 | | 0.1952 | 128.21 | 5000 | 0.6255 | 0.7974 | 0.7977 | | 0.1916 | 133.33 | 5200 | 0.6154 | 0.8011 | 0.8010 | | 0.1882 | 138.46 | 5400 | 0.6214 | 0.8011 | 0.8010 | | 0.1841 | 143.59 | 5600 | 0.6540 | 0.7992 | 0.7993 | | 0.1739 | 148.72 | 5800 | 0.6606 | 0.7995 | 0.7993 | | 0.1734 | 153.85 | 6000 | 0.6523 | 0.8044 | 0.8042 | | 0.1741 | 158.97 | 6200 | 0.6775 | 0.8043 | 0.8042 | | 0.171 | 164.1 | 6400 | 0.6521 | 0.8093 | 0.8091 | | 0.1666 | 169.23 | 6600 | 0.6671 | 0.8028 | 0.8026 | | 0.1672 | 174.36 | 6800 | 0.6838 | 0.8042 | 0.8042 | | 0.1629 | 179.49 | 7000 | 0.6794 | 0.7962 | 0.7961 | | 0.1623 | 184.62 | 7200 | 0.6745 | 0.7995 | 0.7993 | | 0.156 | 189.74 | 7400 | 0.7068 | 0.7930 | 0.7928 | | 0.1523 | 194.87 | 7600 | 0.7110 | 0.7946 | 0.7945 | | 0.1504 | 200.0 | 7800 | 0.7096 | 0.7962 | 0.7961 | | 0.1505 | 205.13 | 8000 | 0.7144 | 0.7929 | 0.7928 | | 0.1483 | 210.26 | 8200 | 0.7163 | 0.7962 | 0.7961 | | 0.1485 | 215.38 | 8400 | 0.7113 | 0.7897 | 0.7896 | | 0.1486 | 220.51 | 8600 | 0.7065 | 0.7930 | 0.7928 | | 0.148 
| 225.64 | 8800 | 0.7195 | 0.7962 | 0.7961 | | 0.1472 | 230.77 | 9000 | 0.7241 | 0.7880 | 0.7879 | | 0.1439 | 235.9 | 9200 | 0.7255 | 0.7946 | 0.7945 | | 0.1436 | 241.03 | 9400 | 0.7192 | 0.7979 | 0.7977 | | 0.1448 | 246.15 | 9600 | 0.7189 | 0.7946 | 0.7945 | | 0.144 | 251.28 | 9800 | 0.7211 | 0.7929 | 0.7928 | | 0.144 | 256.41 | 10000 | 0.7181 | 0.7995 | 0.7993 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
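As with the other adapters in this family, the card omits an inference example. A hedged end-to-end call might look like the following; it assumes the base tokenizer accepts raw nucleotide strings and that the task is binary (promoter vs. non-promoter), neither of which the card spells out:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_22M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2),
    adapter_id,
)
model.eval()

dna = "TATAAAAGGCGCTTAGC"  # toy fragment; a real input would be a 300-bp window
with torch.no_grad():
    probs = model(**tokenizer(dna, return_tensors="pt")).logits.softmax(-1)
print(probs)  # [P(class 0), P(class 1)] under the assumed label order
```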
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_22M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_22M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:27:16+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_prom\_prom\_300\_tata-seqsight\_16384\_512\_22M-L8\_f ========================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset. It achieves the following results on the evaluation set: * Loss: 0.4809 * F1 Score: 0.8026 * Accuracy: 0.8026 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
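The "How to Get Started with the Model" section above is an empty placeholder. A generic sketch for a Llama-family causal LM such as this one is given below; the dtype and device settings are illustrative, and nothing about the checkpoint's intended prompting format is known from the card:

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="shallow6414/dvr76d6",
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",
)
out = generator("Why is the sky blue?", max_new_tokens=64, do_sample=False)
print(out[0]["generated_text"])
```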
{"library_name": "transformers", "tags": []}
shallow6414/dvr76d6
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:28:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
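This card is likewise all placeholders. Judging only from the repository name, the checkpoint appears to be a PEFT adapter for a VinaLLaMA chat model, so the base-model id below is an assumption to verify against the adapter's own config rather than a fact from the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "vilm/vinallama-7b-chat"  # assumed base checkpoint, not stated in the card
adapter_id = "toan-ly/vinallama-peft-7b-chatbot"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id, device_map="auto"),
    adapter_id,
)

prompt = "Xin chào! Bạn có thể giúp gì cho tôi?"  # "Hello! How can you help me?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```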
{"library_name": "transformers", "tags": []}
toan-ly/vinallama-peft-7b-chatbot
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:30:58+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloomz-7b1 - bnb 4bits - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloomz-7b1/ Original model description: --- datasets: - bigscience/xP3 license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript pipeline_tag: text-generation widget: - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?" example_title: "zh-en sentiment" - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?" example_title: "zh-zh sentiment" - text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"." example_title: "vi-en query" - text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»." example_title: "fr-fr query" - text: "Explain in a sentence in Telugu what is backpropagation in neural networks." example_title: "te-en qa" - text: "Why is the sky blue?" example_title: "en-en qa" - text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):" example_title: "es-en fable" - text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". 
Fable (in Hindi):" example_title: "hi-en fable" model-index: - name: bloomz-7b1 results: - task: type: Coreference resolution dataset: type: winogrande name: Winogrande XL (xl) config: xl split: validation revision: a80f460359d1e9a67c006011c94de42a8759430c metrics: - type: Accuracy value: 55.8 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (en) config: en split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 66.02 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (fr) config: fr split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 57.83 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (jp) config: jp split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 52.87 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (pt) config: pt split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 57.79 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (ru) config: ru split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 54.92 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (zh) config: zh split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 63.69 - task: type: Natural language inference dataset: type: anli name: ANLI (r1) config: r1 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 42.1 - task: type: Natural language inference dataset: type: anli name: ANLI (r2) config: r2 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 39.5 - task: type: Natural language inference dataset: type: anli name: ANLI (r3) config: r3 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 41.0 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (cb) config: cb split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 80.36 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (rte) config: rte split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 84.12 - task: type: Natural language inference dataset: type: xnli name: XNLI (ar) config: ar split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 53.25 - task: type: Natural language inference dataset: type: xnli name: XNLI (bg) config: bg split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 43.61 - task: type: Natural language inference dataset: type: xnli name: XNLI (de) config: de split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 46.83 - task: type: Natural language inference dataset: type: xnli name: XNLI (el) config: el split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 41.53 - task: type: Natural language inference dataset: type: xnli name: XNLI (en) config: en split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 59.68 - task: type: Natural language 
inference dataset: type: xnli name: XNLI (es) config: es split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 55.1 - task: type: Natural language inference dataset: type: xnli name: XNLI (fr) config: fr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 55.26 - task: type: Natural language inference dataset: type: xnli name: XNLI (hi) config: hi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.88 - task: type: Natural language inference dataset: type: xnli name: XNLI (ru) config: ru split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 47.75 - task: type: Natural language inference dataset: type: xnli name: XNLI (sw) config: sw split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 46.63 - task: type: Natural language inference dataset: type: xnli name: XNLI (th) config: th split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 40.12 - task: type: Natural language inference dataset: type: xnli name: XNLI (tr) config: tr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 37.55 - task: type: Natural language inference dataset: type: xnli name: XNLI (ur) config: ur split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 46.51 - task: type: Natural language inference dataset: type: xnli name: XNLI (vi) config: vi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 52.93 - task: type: Natural language inference dataset: type: xnli name: XNLI (zh) config: zh split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 53.61 - task: type: Program synthesis dataset: type: openai_humaneval name: HumanEval config: None split: test revision: e8dc562f5de170c54b5481011dd9f4fa04845771 metrics: - type: Pass@1 value: 8.06 - type: Pass@10 value: 15.03 - type: Pass@100 value: 27.49 - task: type: Sentence completion dataset: type: story_cloze name: StoryCloze (2016) config: "2016" split: validation revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db metrics: - type: Accuracy value: 90.43 - task: type: Sentence completion dataset: type: super_glue name: SuperGLUE (copa) config: copa split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 86.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (et) config: et split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 50.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ht) config: ht split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 54.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (id) config: id split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 76.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (it) config: it split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 61.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (qu) config: qu split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 60.0 - task: type: 
Sentence completion dataset: type: xcopa name: XCOPA (sw) config: sw split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 63.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ta) config: ta split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 64.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (th) config: th split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 57.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (tr) config: tr split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 53.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (vi) config: vi split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 79.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (zh) config: zh split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 81.0 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ar) config: ar split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 83.26 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (es) config: es split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 88.95 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (eu) config: eu split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 73.33 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (hi) config: hi split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 80.61 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (id) config: id split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 84.25 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (my) config: my split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 52.55 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ru) config: ru split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 65.32 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (sw) config: sw split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 71.67 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (te) config: te split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 74.72 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (zh) config: zh split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 85.37 --- ![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true) # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Limitations](#limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 7. 
[Citation](#citation) # Model Summary > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages. - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf) - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:[email protected]) - **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages. - **BLOOMZ & mT0 Model Family:** <div class="max-w-full overflow-auto"> <table> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English. </tr> <tr> <td>Parameters</td> <td>300M</td> <td>580M</td> <td>1.2B</td> <td>3.7B</td> <td>13B</td> <td>560M</td> <td>1.1B</td> <td>1.7B</td> <td>3B</td> <td>7.1B</td> <td>176B</td> </tr> <tr> <td>Finetuned Model</td> <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td> <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td> <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td> </tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td> </tr> <th colspan="12">Original pretrained checkpoints. 
Not recommended.</th> <tr> <td>Pretrained Model</td> <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td> <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td> <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td> <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td> <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td> </tr> </table> </div> # Use ## Intended use We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper: - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? - Suggest at least five related search terms to "Mạng neural nhân tạo". - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): - Explain in a sentence in Telugu what is backpropagation in neural networks. **Feel free to share your generations in the Community tab!** ## How to use ### CPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz-7b1" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz-7b1" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto") inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU in 8bit <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz-7b1" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> <!-- Necessary for whitespace --> ### # Limitations **Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) 
at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". # Training ## Model - **Architecture:** Same as [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1), also refer to the `config.json` file - **Finetuning steps:** 1000 - **Finetuning tokens:** 4.19 billion - **Finetuning layout:** 1x pipeline parallel, 1x tensor parallel, 64x data parallel - **Precision:** float16 ## Hardware - **CPUs:** AMD CPUs with 512GB memory per node - **GPUs:** 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links - **Communication:** NCCL-communications network with a fully dedicated subnet ## Software - **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed) - **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5) - **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex) # Evaluation We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config. # Citation ```bibtex @article{muennighoff2022crosslingual, title={Crosslingual generalization through multitask finetuning}, author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others}, journal={arXiv preprint arXiv:2211.01786}, year={2022} } ```
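The original card's snippets above load the full-precision checkpoint; since this repository hosts the bnb 4-bit quantization, a loading sketch for the quantized weights may be useful. This is a minimal, hedged example, not the uploader's documented procedure: it assumes a CUDA GPU with bitsandbytes installed and that the 4-bit quantization config is serialized with the checkpoint (the usual case for bnb quants), so a plain `from_pretrained` picks it up.

```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this record's metadata. For bnb-serialized
# checkpoints the 4-bit config ships in config.json, so from_pretrained
# loads the quantized weights without an explicit quantization_config.
checkpoint = "RichardErkhov/bigscience_-_bloomz-7b1-4bits"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to(model.device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```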
{}
RichardErkhov/bigscience_-_bloomz-7b1-4bits
null
[ "transformers", "safetensors", "bloom", "text-generation", "arxiv:2211.01786", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T06:32:23+00:00
[ "2211.01786" ]
[]
TAGS #transformers #safetensors #bloom #text-generation #arxiv-2211.01786 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bloomz-7b1 - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- datasets: * bigscience/xP3 license: bigscience-bloom-rail-1.0 language: * ak * ar * as * bm * bn * ca * code * en * es * eu * fon * fr * gu * hi * id * ig * ki * kn * lg * ln * ml * mr * ne * nso * ny * or * pa * pt * rn * rw * sn * st * sw * ta * te * tn * ts * tum * tw * ur * vi * wo * xh * yo * zh * zu programming\_language: * C * C++ * C# * Go * Java * JavaScript * Lua * PHP * Python * Ruby * Rust * Scala * TypeScript pipeline\_tag: text-generation widget: * text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?" example\_title: "zh-en sentiment" * text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?" example\_title: "zh-zh sentiment" * text: "Suggest at least five related search terms to "Mạng neural nhân tạo"." example\_title: "vi-en query" * text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»." example\_title: "fr-fr query" * text: "Explain in a sentence in Telugu what is backpropagation in neural networks." example\_title: "te-en qa" * text: "Why is the sky blue?" example\_title: "en-en qa" * text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):" example\_title: "es-en fable" * text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is "Violence is the last refuge of the incompetent". 
Fable (in Hindi):" example\_title: "hi-en fable" model-index: * name: bloomz-7b1 results: + task: type: Coreference resolution dataset: type: winogrande name: Winogrande XL (xl) config: xl split: validation revision: a80f460359d1e9a67c006011c94de42a8759430c metrics: - type: Accuracy value: 55.8 + task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (en) config: en split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 66.02 + task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (fr) config: fr split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 57.83 + task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (jp) config: jp split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 52.87 + task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (pt) config: pt split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 57.79 + task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (ru) config: ru split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 54.92 + task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (zh) config: zh split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 63.69 + task: type: Natural language inference dataset: type: anli name: ANLI (r1) config: r1 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 42.1 + task: type: Natural language inference dataset: type: anli name: ANLI (r2) config: r2 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 39.5 + task: type: Natural language inference dataset: type: anli name: ANLI (r3) config: r3 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 41.0 + task: type: Natural language inference dataset: type: super\_glue name: SuperGLUE (cb) config: cb split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 80.36 + task: type: Natural language inference dataset: type: super\_glue name: SuperGLUE (rte) config: rte split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 84.12 + task: type: Natural language inference dataset: type: xnli name: XNLI (ar) config: ar split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 53.25 + task: type: Natural language inference dataset: type: xnli name: XNLI (bg) config: bg split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 43.61 + task: type: Natural language inference dataset: type: xnli name: XNLI (de) config: de split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 46.83 + task: type: Natural language inference dataset: type: xnli name: XNLI (el) config: el split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 41.53 + task: type: Natural language inference dataset: type: xnli name: XNLI (en) config: en split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 59.68 + task: type: Natural 
language inference dataset: type: xnli name: XNLI (es) config: es split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 55.1 + task: type: Natural language inference dataset: type: xnli name: XNLI (fr) config: fr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 55.26 + task: type: Natural language inference dataset: type: xnli name: XNLI (hi) config: hi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.88 + task: type: Natural language inference dataset: type: xnli name: XNLI (ru) config: ru split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 47.75 + task: type: Natural language inference dataset: type: xnli name: XNLI (sw) config: sw split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 46.63 + task: type: Natural language inference dataset: type: xnli name: XNLI (th) config: th split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 40.12 + task: type: Natural language inference dataset: type: xnli name: XNLI (tr) config: tr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 37.55 + task: type: Natural language inference dataset: type: xnli name: XNLI (ur) config: ur split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 46.51 + task: type: Natural language inference dataset: type: xnli name: XNLI (vi) config: vi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 52.93 + task: type: Natural language inference dataset: type: xnli name: XNLI (zh) config: zh split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 53.61 + task: type: Program synthesis dataset: type: openai\_humaneval name: HumanEval config: None split: test revision: e8dc562f5de170c54b5481011dd9f4fa04845771 metrics: - type: Pass@1 value: 8.06 - type: Pass@10 value: 15.03 - type: Pass@100 value: 27.49 + task: type: Sentence completion dataset: type: story\_cloze name: StoryCloze (2016) config: "2016" split: validation revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db metrics: - type: Accuracy value: 90.43 + task: type: Sentence completion dataset: type: super\_glue name: SuperGLUE (copa) config: copa split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 86.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (et) config: et split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 50.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (ht) config: ht split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 54.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (id) config: id split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 76.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (it) config: it split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 61.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (qu) config: qu split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 60.0 + 
task: type: Sentence completion dataset: type: xcopa name: XCOPA (sw) config: sw split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 63.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (ta) config: ta split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 64.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (th) config: th split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 57.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (tr) config: tr split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 53.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (vi) config: vi split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 79.0 + task: type: Sentence completion dataset: type: xcopa name: XCOPA (zh) config: zh split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 81.0 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (ar) config: ar split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 83.26 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (es) config: es split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 88.95 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (eu) config: eu split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 73.33 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (hi) config: hi split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 80.61 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (id) config: id split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 84.25 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (my) config: my split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 52.55 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (ru) config: ru split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 65.32 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (sw) config: sw split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 71.67 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (te) config: te split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 74.72 + task: type: Sentence completion dataset: type: Muennighoff/xstory\_cloze name: XStoryCloze (zh) config: zh split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 85.37 --- !xmtf Table of Contents ================= 1. Model Summary 2. Use 3. Limitations 4. Training 5. Evaluation 6. 
Citation Model Summary ============= > > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages. > > > * Repository: bigscience-workshop/xmtf * Paper: Crosslingual Generalization through Multitask Finetuning * Point of Contact: Niklas Muennighoff * Languages: Refer to bloom for pretraining & xP3 for finetuning language proportions. It understands both pretraining & finetuning languages. * BLOOMZ & mT0 Model Family: Use === Intended use ------------ We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper: * 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? * Suggest at least five related search terms to "Mạng neural nhân tạo". * Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): * Explain in a sentence in Telugu what is backpropagation in neural networks. Feel free to share your generations in the Community tab! How to use ---------- ### CPU Click to expand ### GPU Click to expand ### GPU in 8bit Click to expand ### Limitations =========== Prompt Engineering: The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". Training ======== Model ----- * Architecture: Same as bloom-7b1, also refer to the 'URL' file * Finetuning steps: 1000 * Finetuning tokens: 4.19 billion * Finetuning layout: 1x pipeline parallel, 1x tensor parallel, 64x data parallel * Precision: float16 Hardware -------- * CPUs: AMD CPUs with 512GB memory per node * GPUs: 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links * Communication: NCCL-communications network with a fully dedicated subnet Software -------- * Orchestration: Megatron-DeepSpeed * Optimizer & parallelism: DeepSpeed * Neural networks: PyTorch (pytorch-1.11 w/ CUDA-11.5) * FP16 if applicable: apex Evaluation ========== We refer to Table 7 from our paper & bigscience/evaluation-results for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
[ "### CPU\n\n\n\n Click to expand", "### GPU\n\n\n\n Click to expand", "### GPU in 8bit\n\n\n\n Click to expand", "### \n\n\nLimitations\n===========\n\n\nPrompt Engineering: The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt \"*Translate to English: Je t'aime*\" without the full stop (.) at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. \"*Translate to English: Je t'aime.*\", \"*Translate to English: Je t'aime. Translation:*\" \"*What is \"Je t'aime.\" in English?*\", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. \"*Explain in a sentence in Telugu what is backpropagation in neural networks.*\".\n\n\nTraining\n========\n\n\nModel\n-----\n\n\n* Architecture: Same as bloom-7b1, also refer to the 'URL' file\n* Finetuning steps: 1000\n* Finetuning tokens: 4.19 billion\n* Finetuning layout: 1x pipeline parallel, 1x tensor parallel, 64x data parallel\n* Precision: float16\n\n\nHardware\n--------\n\n\n* CPUs: AMD CPUs with 512GB memory per node\n* GPUs: 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links\n* Communication: NCCL-communications network with a fully dedicated subnet\n\n\nSoftware\n--------\n\n\n* Orchestration: Megatron-DeepSpeed\n* Optimizer & parallelism: DeepSpeed\n* Neural networks: PyTorch (pytorch-1.11 w/ CUDA-11.5)\n* FP16 if applicable: apex\n\n\nEvaluation\n==========\n\n\nWe refer to Table 7 from our paper & bigscience/evaluation-results for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config." ]
[ "TAGS\n#transformers #safetensors #bloom #text-generation #arxiv-2211.01786 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### CPU\n\n\n\n Click to expand", "### GPU\n\n\n\n Click to expand", "### GPU in 8bit\n\n\n\n Click to expand", "### \n\n\nLimitations\n===========\n\n\nPrompt Engineering: The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt \"*Translate to English: Je t'aime*\" without the full stop (.) at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. \"*Translate to English: Je t'aime.*\", \"*Translate to English: Je t'aime. Translation:*\" \"*What is \"Je t'aime.\" in English?*\", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. \"*Explain in a sentence in Telugu what is backpropagation in neural networks.*\".\n\n\nTraining\n========\n\n\nModel\n-----\n\n\n* Architecture: Same as bloom-7b1, also refer to the 'URL' file\n* Finetuning steps: 1000\n* Finetuning tokens: 4.19 billion\n* Finetuning layout: 1x pipeline parallel, 1x tensor parallel, 64x data parallel\n* Precision: float16\n\n\nHardware\n--------\n\n\n* CPUs: AMD CPUs with 512GB memory per node\n* GPUs: 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links\n* Communication: NCCL-communications network with a fully dedicated subnet\n\n\nSoftware\n--------\n\n\n* Orchestration: Megatron-DeepSpeed\n* Optimizer & parallelism: DeepSpeed\n* Neural networks: PyTorch (pytorch-1.11 w/ CUDA-11.5)\n* FP16 if applicable: apex\n\n\nEvaluation\n==========\n\n\nWe refer to Table 7 from our paper & bigscience/evaluation-results for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config." ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - embracellm/sushi_LoRA <Gallery /> ## Model description These are embracellm/sushi_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sushi to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/embracellm/sushi_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
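The "How to use" section above is left as a TODO; the following is a minimal sketch of the standard diffusers pattern for SDXL LoRA weights, not code shipped with this repository. The base model, fp16 VAE, and trigger phrase come from the card itself; the dtype, device, and prompt wording are assumptions.

```python
# pip install -q diffusers transformers accelerate safetensors
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Base model and fp16-fixed VAE are the ones named in the card above.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adaptation weights from this repository.
pipe.load_lora_weights("embracellm/sushi_LoRA")

# "a photo of sushi" is the instance prompt / trigger phrase from the card.
image = pipe("a photo of sushi on a wooden board").images[0]
image.save("sushi.png")
```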
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of sushi", "widget": []}
embracellm/sushi_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-27T06:32:28+00:00
[]
[]
TAGS #diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - embracellm/sushi_LoRA <Gallery /> ## Model description These are embracellm/sushi_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sushi to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - embracellm/sushi_LoRA\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of sushi to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - embracellm/sushi_LoRA\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of sushi to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_16384_512_22M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 0.6245 - F1 Score: 0.7798 - Accuracy: 0.7798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5281 | 5.13 | 200 | 0.4844 | 0.7875 | 0.7879 | | 0.4329 | 10.26 | 400 | 0.4901 | 0.7693 | 0.7700 | | 0.3869 | 15.38 | 600 | 0.4822 | 0.7960 | 0.7961 | | 0.3489 | 20.51 | 800 | 0.4849 | 0.7995 | 0.7993 | | 0.3155 | 25.64 | 1000 | 0.5261 | 0.8043 | 0.8042 | | 0.2837 | 30.77 | 1200 | 0.5394 | 0.8027 | 0.8026 | | 0.2574 | 35.9 | 1400 | 0.5679 | 0.8026 | 0.8026 | | 0.229 | 41.03 | 1600 | 0.5776 | 0.8092 | 0.8091 | | 0.2094 | 46.15 | 1800 | 0.5861 | 0.7928 | 0.7928 | | 0.1835 | 51.28 | 2000 | 0.6079 | 0.8092 | 0.8091 | | 0.1678 | 56.41 | 2200 | 0.6691 | 0.8011 | 0.8010 | | 0.1497 | 61.54 | 2400 | 0.7839 | 0.7742 | 0.7749 | | 0.1367 | 66.67 | 2600 | 0.7662 | 0.7962 | 0.7961 | | 0.1267 | 71.79 | 2800 | 0.7840 | 0.7832 | 0.7830 | | 0.121 | 76.92 | 3000 | 0.8157 | 0.7880 | 0.7879 | | 0.1092 | 82.05 | 3200 | 0.8645 | 0.7864 | 0.7863 | | 0.1085 | 87.18 | 3400 | 0.7989 | 0.7962 | 0.7961 | | 0.0993 | 92.31 | 3600 | 0.8623 | 0.8024 | 0.8026 | | 0.0921 | 97.44 | 3800 | 0.8916 | 0.7895 | 0.7896 | | 0.0861 | 102.56 | 4000 | 0.9362 | 0.7897 | 0.7896 | | 0.0837 | 107.69 | 4200 | 0.9484 | 0.7910 | 0.7912 | | 0.0773 | 112.82 | 4400 | 0.9369 | 0.8011 | 0.8010 | | 0.0721 | 117.95 | 4600 | 0.9656 | 0.7995 | 0.7993 | | 0.0721 | 123.08 | 4800 | 1.0188 | 0.7944 | 0.7945 | | 0.0675 | 128.21 | 5000 | 0.9916 | 0.7978 | 0.7977 | | 0.0659 | 133.33 | 5200 | 0.9771 | 0.8060 | 0.8059 | | 0.0602 | 138.46 | 5400 | 1.0305 | 0.7863 | 0.7863 | | 0.0589 | 143.59 | 5600 | 1.0362 | 0.7979 | 0.7977 | | 0.0583 | 148.72 | 5800 | 1.0196 | 0.7994 | 0.7993 | | 0.055 | 153.85 | 6000 | 1.0837 | 0.8011 | 0.8010 | | 0.0537 | 158.97 | 6200 | 1.1688 | 0.7977 | 0.7977 | | 0.0561 | 164.1 | 6400 | 1.0659 | 0.8060 | 0.8059 | | 0.0508 | 169.23 | 6600 | 1.1277 | 0.7959 | 0.7961 | | 0.05 | 174.36 | 6800 | 1.0920 | 0.7913 | 0.7912 | | 0.0493 | 179.49 | 7000 | 1.0955 | 0.8044 | 0.8042 | | 0.0482 | 184.62 | 7200 | 1.1218 | 0.7978 | 0.7977 | | 0.0462 | 189.74 | 7400 | 1.1239 | 0.7930 | 0.7928 | | 0.0446 | 194.87 | 7600 | 1.1725 | 0.7962 | 0.7961 | | 0.041 | 200.0 | 7800 | 1.2086 | 0.7992 | 0.7993 | | 0.0435 | 205.13 | 8000 | 1.1534 | 0.7962 | 0.7961 | | 0.0435 | 210.26 | 8200 | 1.1784 | 0.8043 | 0.8042 | | 0.0423 | 215.38 | 8400 | 1.1516 | 0.7962 | 0.7961 | | 0.0386 | 220.51 | 8600 | 1.1916 | 0.7929 | 0.7928 | | 0.0407 
| 225.64 | 8800 | 1.1814 | 0.7995 | 0.7993 | | 0.0411 | 230.77 | 9000 | 1.1773 | 0.8011 | 0.8010 | | 0.0406 | 235.9 | 9200 | 1.1888 | 0.8011 | 0.8010 | | 0.0369 | 241.03 | 9400 | 1.1865 | 0.8060 | 0.8059 | | 0.0372 | 246.15 | 9600 | 1.1899 | 0.8011 | 0.8010 | | 0.0366 | 251.28 | 9800 | 1.1979 | 0.7995 | 0.7993 | | 0.0375 | 256.41 | 10000 | 1.2061 | 0.7995 | 0.7993 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
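As an illustration of how the reported hyperparameters map onto the Hugging Face `Trainer` API, here is a hypothetical sketch; the argument names follow `transformers.TrainingArguments`, and this is a reconstruction for readability, not the script that produced the model (the Adam betas and epsilon listed above are the optimizer defaults).

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters reported above; the
# 200-step evaluation cadence is inferred from the results table.
args = TrainingArguments(
    output_dir="GUE_prom_prom_300_tata-seqsight_16384_512_22M-L32_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,
)
```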
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_22M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_22M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:32:36+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_prom\_prom\_300\_tata-seqsight\_16384\_512\_22M-L32\_f =========================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset. It achieves the following results on the evaluation set: * Loss: 0.6245 * F1 Score: 0.7798 * Accuracy: 0.7798 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
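The "How to Get Started" section above is unfilled; the tags in this record mark the checkpoint as a llama-architecture text-generation model, so a minimal, hedged generation sketch would look like the following. The checkpoint id comes from this record's metadata; the prompt and generation settings are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint id taken from this record's metadata.
checkpoint = "swj0419/bbc_retrain_new_STEP0000150"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Placeholder prompt; swap in your own text.
inputs = tokenizer("The BBC reported that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```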
{"library_name": "transformers", "tags": []}
swj0419/bbc_retrain_new_STEP0000150
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-27T06:33:57+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
PIXMELT/Qwarte7B-v0.1-dev3-merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-27T06:34:19+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_300_notata-seqsight_16384_512_22M-L1_f

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1287
- F1 Score: 0.9512
- Accuracy: 0.9512

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3741        | 0.6   | 200   | 0.1942          | 0.9212   | 0.9212   |
| 0.2083        | 1.2   | 400   | 0.1616          | 0.9374   | 0.9374   |
| 0.1855        | 1.81  | 600   | 0.1442          | 0.9442   | 0.9442   |
| 0.1611        | 2.41  | 800   | 0.1339          | 0.9457   | 0.9457   |
| 0.1556        | 3.01  | 1000  | 0.1369          | 0.9454   | 0.9454   |
| 0.1454        | 3.61  | 1200  | 0.1297          | 0.9478   | 0.9478   |
| 0.1474        | 4.22  | 1400  | 0.1292          | 0.9482   | 0.9482   |
| 0.1403        | 4.82  | 1600  | 0.1205          | 0.9525   | 0.9525   |
| 0.1363        | 5.42  | 1800  | 0.1262          | 0.9508   | 0.9508   |
| 0.1328        | 6.02  | 2000  | 0.1309          | 0.9484   | 0.9484   |
| 0.1359        | 6.63  | 2200  | 0.1201          | 0.9518   | 0.9518   |
| 0.1316        | 7.23  | 2400  | 0.1174          | 0.9519   | 0.9520   |
| 0.1265        | 7.83  | 2600  | 0.1174          | 0.9538   | 0.9538   |
| 0.1325        | 8.43  | 2800  | 0.1160          | 0.9538   | 0.9538   |
| 0.1287        | 9.04  | 3000  | 0.1138          | 0.9561   | 0.9561   |
| 0.1264        | 9.64  | 3200  | 0.1295          | 0.9523   | 0.9523   |
| 0.1275        | 10.24 | 3400  | 0.1133          | 0.9555   | 0.9555   |
| 0.1265        | 10.84 | 3600  | 0.1142          | 0.9553   | 0.9553   |
| 0.1232        | 11.45 | 3800  | 0.1166          | 0.9546   | 0.9546   |
| 0.1235        | 12.05 | 4000  | 0.1148          | 0.9544   | 0.9544   |
| 0.1242        | 12.65 | 4200  | 0.1169          | 0.9529   | 0.9529   |
| 0.1244        | 13.25 | 4400  | 0.1161          | 0.9555   | 0.9555   |
| 0.1219        | 13.86 | 4600  | 0.1144          | 0.9542   | 0.9542   |
| 0.1231        | 14.46 | 4800  | 0.1146          | 0.9561   | 0.9561   |
| 0.1196        | 15.06 | 5000  | 0.1142          | 0.9557   | 0.9557   |
| 0.1197        | 15.66 | 5200  | 0.1144          | 0.9561   | 0.9561   |
| 0.1212        | 16.27 | 5400  | 0.1137          | 0.9559   | 0.9559   |
| 0.1172        | 16.87 | 5600  | 0.1140          | 0.9561   | 0.9561   |
| 0.1172        | 17.47 | 5800  | 0.1099          | 0.9567   | 0.9567   |
| 0.1221        | 18.07 | 6000  | 0.1106          | 0.9553   | 0.9553   |
| 0.1191        | 18.67 | 6200  | 0.1146          | 0.9555   | 0.9555   |
| 0.1198        | 19.28 | 6400  | 0.1131          | 0.9561   | 0.9561   |
| 0.1167        | 19.88 | 6600  | 0.1117          | 0.9570   | 0.9570   |
| 0.1224        | 20.48 | 6800  | 0.1105          | 0.9576   | 0.9576   |
| 0.1127        | 21.08 | 7000  | 0.1139          | 0.9561   | 0.9561   |
| 0.1165        | 21.69 | 7200  | 0.1134          | 0.9550   | 0.9550   |
| 0.1156        | 22.29 | 7400  | 0.1157          | 0.9544   | 0.9544   |
| 0.1208        | 22.89 | 7600  | 0.1098          | 0.9563   | 0.9563   |
| 0.1155        | 23.49 | 7800  | 0.1112          | 0.9567   | 0.9567   |
| 0.1153        | 24.1  | 8000  | 0.1117          | 0.9567   | 0.9567   |
| 0.1164        | 24.7  | 8200  | 0.1130          | 0.9567   | 0.9567   |
| 0.117         | 25.3  | 8400  | 0.1115          | 0.9563   | 0.9563   |
| 0.1149        | 25.9  | 8600  | 0.1107          | 0.9559   | 0.9559   |
| 0.1163        | 26.51 | 8800  | 0.1107          | 0.9568   | 0.9568   |
| 0.1155        | 27.11 | 9000  | 0.1109          | 0.9570   | 0.9570   |
| 0.1152        | 27.71 | 9200  | 0.1108          | 0.9567   | 0.9567   |
| 0.1142        | 28.31 | 9400  | 0.1098          | 0.9567   | 0.9567   |
| 0.1192        | 28.92 | 9600  | 0.1112          | 0.9567   | 0.9567   |
| 0.1124        | 29.52 | 9800  | 0.1106          | 0.9567   | 0.9567   |
| 0.1154        | 30.12 | 10000 | 0.1108          | 0.9567   | 0.9567   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
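The hyperparameters recorded in this card map directly onto 🤗 `TrainingArguments` fields. As an illustrative sketch only — the card does not record the PEFT adapter settings, so the LoRA rank/alpha, `num_labels`, and `trust_remote_code` flag below are assumptions rather than the checkpoint's actual configuration:

```python
# Hypothetical reconstruction of the recorded training setup; only the
# TrainingArguments values come from the card itself.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_16384_512_22M",
    num_labels=2,            # assumed: binary promoter classification
    trust_remote_code=True,  # assumed: custom architecture hosted on the Hub
)
model = get_peft_model(
    base,
    LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16),  # r/alpha assumed
)

args = TrainingArguments(
    output_dir="GUE_prom_prom_300_notata-seqsight_16384_512_22M-L1_f",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # epsilon: 1e-08
    lr_scheduler_type="linear",
    max_steps=10_000,                 # training_steps: 10000
)
```

A `Trainer(model=model, args=args, ...)` would then be given the train/eval splits of the GUE_prom_prom_300_notata dataset along with an F1/accuracy `compute_metrics` function to reproduce a run of this shape.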
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_16384_512_22M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_22M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-27T06:35:04+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_prom\_prom\_300\_notata-seqsight\_16384\_512\_22M-L1\_f ============================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset. It achieves the following results on the evaluation set: * Loss: 0.1287 * F1 Score: 0.9512 * Accuracy: 0.9512 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/yxng8im
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:35:25+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/55p1wba
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:35:25+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/9cd8j0p
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-27T06:35:25+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]